Feb 24 00:05:34 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 24 00:05:34 crc restorecon[4687]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 00:05:34 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 24 00:05:35 crc restorecon[4687]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc 
restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:35 crc 
restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 
00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 00:05:35 crc 
restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 00:05:35 crc 
restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35
crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc 
restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 24 00:05:35 crc restorecon[4687]:
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:35 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 00:05:35 crc restorecon[4687]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 00:05:35 crc restorecon[4687]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Feb 24 00:05:36 crc kubenswrapper[4824]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 24 00:05:36 crc kubenswrapper[4824]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 24 00:05:36 crc kubenswrapper[4824]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 24 00:05:36 crc kubenswrapper[4824]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 24 00:05:36 crc kubenswrapper[4824]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 24 00:05:36 crc kubenswrapper[4824]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.433853    4824 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.440920    4824 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.440961    4824 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.440973    4824 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.440984    4824 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.440995    4824 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441004    4824 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441018    4824 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441030    4824 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441040    4824 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441051    4824 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441060    4824 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441069    4824 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441077    4824 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441086    4824 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441095    4824 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441106    4824 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441114    4824 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441123    4824 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441132    4824 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441140    4824 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441149    4824 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441158    4824 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441166    4824 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441175    4824 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441183    4824 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441191    4824 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441200    4824 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441209    4824 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441218    4824 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441228    4824 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441236    4824 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441245    4824 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441254    4824 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441262    4824 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441270    4824 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441278    4824 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441286    4824 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441295    4824 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441303    4824 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441315    4824 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441326    4824 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441338    4824 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441349    4824 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441359    4824 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441369    4824 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441378    4824 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441389    4824 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441399    4824 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441409    4824 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441420    4824 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441431    4824 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441443    4824 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441454    4824 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441464    4824 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441474    4824 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441485    4824 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441495    4824 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441506    4824 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441544    4824 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441555    4824 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441566    4824 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441583    4824 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441596 4824 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441608 4824 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441618 4824 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441630 4824 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441640 4824 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441657 4824 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441669 4824 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441679 4824 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.441689 4824 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.442967 4824 flags.go:64] FLAG: --address="0.0.0.0" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443008 4824 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443025 4824 flags.go:64] FLAG: --anonymous-auth="true" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443068 4824 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443083 4824 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443095 4824 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 
00:05:36.443108 4824 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443121 4824 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443132 4824 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443141 4824 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443152 4824 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443162 4824 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443172 4824 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443182 4824 flags.go:64] FLAG: --cgroup-root="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443191 4824 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443201 4824 flags.go:64] FLAG: --client-ca-file="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443211 4824 flags.go:64] FLAG: --cloud-config="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443220 4824 flags.go:64] FLAG: --cloud-provider="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443229 4824 flags.go:64] FLAG: --cluster-dns="[]" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443244 4824 flags.go:64] FLAG: --cluster-domain="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443254 4824 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443265 4824 flags.go:64] FLAG: --config-dir="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443274 4824 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 24 00:05:36 
crc kubenswrapper[4824]: I0224 00:05:36.443284 4824 flags.go:64] FLAG: --container-log-max-files="5" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443296 4824 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443306 4824 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443316 4824 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443327 4824 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443336 4824 flags.go:64] FLAG: --contention-profiling="false" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443345 4824 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443356 4824 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443367 4824 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443378 4824 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443390 4824 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443400 4824 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443410 4824 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443420 4824 flags.go:64] FLAG: --enable-load-reader="false" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443429 4824 flags.go:64] FLAG: --enable-server="true" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443439 4824 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443451 4824 flags.go:64] 
FLAG: --event-burst="100" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443461 4824 flags.go:64] FLAG: --event-qps="50" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443472 4824 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443482 4824 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443491 4824 flags.go:64] FLAG: --eviction-hard="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443511 4824 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443546 4824 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443556 4824 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443567 4824 flags.go:64] FLAG: --eviction-soft="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443577 4824 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443586 4824 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443596 4824 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443605 4824 flags.go:64] FLAG: --experimental-mounter-path="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443615 4824 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443625 4824 flags.go:64] FLAG: --fail-swap-on="true" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443635 4824 flags.go:64] FLAG: --feature-gates="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443646 4824 flags.go:64] FLAG: --file-check-frequency="20s" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443656 4824 flags.go:64] FLAG: 
--global-housekeeping-interval="1m0s" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443666 4824 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443676 4824 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443686 4824 flags.go:64] FLAG: --healthz-port="10248" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443696 4824 flags.go:64] FLAG: --help="false" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443705 4824 flags.go:64] FLAG: --hostname-override="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443715 4824 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443725 4824 flags.go:64] FLAG: --http-check-frequency="20s" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443734 4824 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443744 4824 flags.go:64] FLAG: --image-credential-provider-config="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443753 4824 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443763 4824 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443774 4824 flags.go:64] FLAG: --image-service-endpoint="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443784 4824 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443793 4824 flags.go:64] FLAG: --kube-api-burst="100" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443803 4824 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443813 4824 flags.go:64] FLAG: --kube-api-qps="50" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443822 4824 
flags.go:64] FLAG: --kube-reserved="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443832 4824 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443842 4824 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443852 4824 flags.go:64] FLAG: --kubelet-cgroups="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443861 4824 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443871 4824 flags.go:64] FLAG: --lock-file="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443881 4824 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443890 4824 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443900 4824 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443914 4824 flags.go:64] FLAG: --log-json-split-stream="false" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443925 4824 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443934 4824 flags.go:64] FLAG: --log-text-split-stream="false" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443944 4824 flags.go:64] FLAG: --logging-format="text" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443953 4824 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443964 4824 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443974 4824 flags.go:64] FLAG: --manifest-url="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443983 4824 flags.go:64] FLAG: --manifest-url-header="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.443996 4824 
flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444006 4824 flags.go:64] FLAG: --max-open-files="1000000" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444017 4824 flags.go:64] FLAG: --max-pods="110" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444026 4824 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444036 4824 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444046 4824 flags.go:64] FLAG: --memory-manager-policy="None" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444055 4824 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444065 4824 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444075 4824 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444085 4824 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444107 4824 flags.go:64] FLAG: --node-status-max-images="50" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444117 4824 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444127 4824 flags.go:64] FLAG: --oom-score-adj="-999" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444137 4824 flags.go:64] FLAG: --pod-cidr="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444157 4824 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444171 4824 flags.go:64] FLAG: 
--pod-manifest-path="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444181 4824 flags.go:64] FLAG: --pod-max-pids="-1" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444191 4824 flags.go:64] FLAG: --pods-per-core="0" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444201 4824 flags.go:64] FLAG: --port="10250" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444212 4824 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444222 4824 flags.go:64] FLAG: --provider-id="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444232 4824 flags.go:64] FLAG: --qos-reserved="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444242 4824 flags.go:64] FLAG: --read-only-port="10255" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444252 4824 flags.go:64] FLAG: --register-node="true" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444261 4824 flags.go:64] FLAG: --register-schedulable="true" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444271 4824 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444286 4824 flags.go:64] FLAG: --registry-burst="10" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444296 4824 flags.go:64] FLAG: --registry-qps="5" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444306 4824 flags.go:64] FLAG: --reserved-cpus="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444315 4824 flags.go:64] FLAG: --reserved-memory="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444329 4824 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444339 4824 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444348 4824 flags.go:64] FLAG: --rotate-certificates="false" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 
00:05:36.444358 4824 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444368 4824 flags.go:64] FLAG: --runonce="false" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444377 4824 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444400 4824 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444411 4824 flags.go:64] FLAG: --seccomp-default="false" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444587 4824 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444598 4824 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444609 4824 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444619 4824 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444629 4824 flags.go:64] FLAG: --storage-driver-password="root" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444638 4824 flags.go:64] FLAG: --storage-driver-secure="false" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444648 4824 flags.go:64] FLAG: --storage-driver-table="stats" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444658 4824 flags.go:64] FLAG: --storage-driver-user="root" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444668 4824 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444678 4824 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444688 4824 flags.go:64] FLAG: --system-cgroups="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444697 4824 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444717 4824 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444726 4824 flags.go:64] FLAG: --tls-cert-file="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444738 4824 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444751 4824 flags.go:64] FLAG: --tls-min-version="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444761 4824 flags.go:64] FLAG: --tls-private-key-file="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444771 4824 flags.go:64] FLAG: --topology-manager-policy="none" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444781 4824 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444791 4824 flags.go:64] FLAG: --topology-manager-scope="container" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444801 4824 flags.go:64] FLAG: --v="2" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444814 4824 flags.go:64] FLAG: --version="false" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444827 4824 flags.go:64] FLAG: --vmodule="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444838 4824 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.444849 4824 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445134 4824 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445148 4824 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445159 4824 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 00:05:36 crc 
kubenswrapper[4824]: W0224 00:05:36.445172 4824 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445186 4824 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445199 4824 feature_gate.go:330] unrecognized feature gate: Example Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445211 4824 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445222 4824 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445234 4824 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445245 4824 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445255 4824 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445266 4824 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445280 4824 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445293 4824 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445305 4824 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445318 4824 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445329 4824 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445341 4824 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445353 4824 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445365 4824 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445373 4824 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445382 4824 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445390 4824 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445402 4824 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445410 4824 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445419 4824 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445427 4824 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 
00:05:36.445438 4824 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445449 4824 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445459 4824 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445470 4824 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445478 4824 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445487 4824 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445496 4824 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445505 4824 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445514 4824 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445577 4824 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445596 4824 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445608 4824 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445618 4824 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445627 4824 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445636 4824 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445645 4824 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445654 4824 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445663 4824 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445672 4824 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445680 4824 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445688 4824 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445697 4824 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445705 4824 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445714 4824 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445722 4824 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445731 4824 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445739 4824 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445748 4824 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445756 4824 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445765 4824 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445774 4824 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445782 4824 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445794 4824 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445803 4824 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445812 4824 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445820 4824 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445829 4824 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445837 4824 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445846 4824 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445855 4824 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445864 4824 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445873 4824 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445882 4824 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.445890 4824 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.445921 4824 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.457695 4824 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.457766 4824 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.457891 4824 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.457914 4824 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.457924 4824 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.457937 4824 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.457946 4824 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.457954 4824 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.457962 4824 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.457970 4824 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.457978 4824 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.457987 4824 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.457995 4824 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458003 4824 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458011 4824 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458020 4824 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458028 4824 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458035 4824 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458043 4824 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458054 4824 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458065 4824 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458074 4824 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458083 4824 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458092 4824 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458103 4824 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458114 4824 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458125 4824 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458136 4824 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458150 4824 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458166 4824 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458176 4824 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458188 4824 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458198 4824 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458208 4824 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458217 4824 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458224 4824 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458235 4824 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458243 4824 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458252 4824 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458259 4824 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458267 4824 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458274 4824 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458282 4824 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458289 4824 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458297 4824 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458336 4824 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458344 4824 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458353 4824 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458363 4824 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458374 4824 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458383 4824 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458391 4824 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458400 4824 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458407 4824 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458415 4824 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458423 4824 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458430 4824 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458438 4824 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458446 4824 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458453 4824 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458461 4824 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458469 4824 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458477 4824 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458488 4824 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458498 4824 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458506 4824 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458540 4824 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458549 4824 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458556 4824 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458564 4824 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458571 4824 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458579 4824 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458588 4824 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.458602 4824 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458824 4824 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458836 4824 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458845 4824 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458853 4824 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458861 4824 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458869 4824 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458878 4824 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458887 4824 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458898 4824 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458908 4824 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458919 4824 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458928 4824 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458937 4824 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458947 4824 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458957 4824 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458967 4824 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458976 4824 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458986 4824 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.458994 4824 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459003 4824 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459013 4824 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459022 4824 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459032 4824 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459041 4824 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459073 4824 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459083 4824 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459092 4824 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459102 4824 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459111 4824 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459122 4824 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459132 4824 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459141 4824 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459151 4824 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459162 4824 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459189 4824 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459204 4824 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459215 4824 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459224 4824 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459234 4824 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459244 4824 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459254 4824 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459265 4824 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459275 4824 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459287 4824 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459300 4824 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459311 4824 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459321 4824 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459334 4824 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459345 4824 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459355 4824 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459368 4824 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459378 4824 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459387 4824 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459397 4824 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459407 4824 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459417 4824 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459426 4824 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459436 4824 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459446 4824 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459456 4824 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459465 4824 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459476 4824 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459486 4824 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459495 4824 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459505 4824 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459548 4824 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459559 4824 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459570 4824 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459580 4824 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459588 4824 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.459599 4824 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.459612 4824 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.460890 4824 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.466704 4824 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.466902 4824 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.469092 4824 server.go:997] "Starting client certificate rotation"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.469153 4824 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.469424 4824 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-04 10:35:59.668298275 +0000 UTC
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.469545 4824 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.498486 4824 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 24 00:05:36 crc kubenswrapper[4824]: E0224 00:05:36.503248 4824 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.505296 4824 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.520607 4824 log.go:25] "Validated CRI v1 runtime API"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.553907 4824 log.go:25] "Validated CRI v1 image API"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.556358 4824 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.561735 4824 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-24-00-01-03-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.561784 4824 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.588839 4824 manager.go:217] Machine: {Timestamp:2026-02-24 00:05:36.585554392 +0000 UTC m=+0.575178901 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:d5e3d68d-d538-4dbe-b3fe-7347ab36b29a BootID:7ea41d01-04ab-44da-af10-993e94777268 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:5d:37:c8 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:5d:37:c8 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b4:f4:28 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:fe:4d:17 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d8:a0:3d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:1c:64:f9 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:7a:6d:28:fa:f8:c1 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:0a:cf:a3:57:88:fb Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.589227 4824 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.589468 4824 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.591070 4824 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.591448 4824 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.591673 4824 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.592085 4824 topology_manager.go:138] "Creating topology manager with none policy"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.592112 4824 container_manager_linux.go:303] "Creating device plugin manager"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.592927 4824 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.592990 4824 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.593338 4824 state_mem.go:36] "Initialized new in-memory state store"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.593553 4824 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.601814 4824 kubelet.go:418] "Attempting to sync node with API server"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.601935 4824 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.602011 4824 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.602050 4824 kubelet.go:324] "Adding apiserver pod source"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.602082 4824 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.610668 4824 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.611158 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused
Feb 24 00:05:36 crc kubenswrapper[4824]: E0224 00:05:36.611270 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError"
Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.611245 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused
Feb 24 00:05:36 crc kubenswrapper[4824]: E0224 00:05:36.611349 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError"
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.611860 4824 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.613249 4824 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.615207 4824 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.615241 4824 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.615252 4824 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.615262 4824 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.615279 4824 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.615290 4824 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.615299 4824 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.615316 4824 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.615329 4824 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.615340 4824 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.615356 4824 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.615367 4824 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.616635 4824 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.617288 4824 server.go:1280] "Started kubelet" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.617782 4824 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.617934 4824 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.618588 4824 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.619266 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 00:05:36 crc systemd[1]: Started Kubernetes Kubelet. Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.619686 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.620031 4824 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.620655 4824 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.620841 4824 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.620649 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 19:26:58.16827453 +0000 UTC Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.621072 4824 server.go:460] "Adding debug handlers to kubelet server" Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.621372 4824 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 00:05:36 crc kubenswrapper[4824]: E0224 00:05:36.621421 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:05:36 crc kubenswrapper[4824]: E0224 00:05:36.621384 4824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="200ms" Feb 24 00:05:36 crc kubenswrapper[4824]: E0224 00:05:36.621564 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.620908 4824 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.622191 4824 factory.go:55] Registering systemd factory Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.622218 4824 factory.go:221] Registration of the systemd container factory successfully Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.623428 4824 factory.go:153] Registering CRI-O factory Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.623472 4824 factory.go:221] Registration of the crio container factory successfully Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.623730 4824 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial 
containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.623819 4824 factory.go:103] Registering Raw factory Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.623841 4824 manager.go:1196] Started watching for new ooms in manager Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.626768 4824 manager.go:319] Starting recovery of all containers Feb 24 00:05:36 crc kubenswrapper[4824]: E0224 00:05:36.636297 4824 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.151:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189705f6f4979d2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:05:36.61725009 +0000 UTC m=+0.606874579,LastTimestamp:2026-02-24 00:05:36.61725009 +0000 UTC m=+0.606874579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.647587 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.647700 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: 
I0224 00:05:36.647719 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.647732 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.647751 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.647765 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.647780 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.647793 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.647811 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.647827 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.647842 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.647856 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.647874 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.647892 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.647905 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.647924 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.647936 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.647950 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.647965 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.647978 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.647991 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" 
seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648011 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648024 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648038 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648055 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648070 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648084 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648134 
4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648153 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648167 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648181 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648199 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648212 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648226 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648239 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648253 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648271 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648286 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648298 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648313 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648327 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648339 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648353 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648392 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648408 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648424 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648444 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648460 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648476 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648490 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648503 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648540 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648561 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648575 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648590 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648606 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648618 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648630 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648643 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648659 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648673 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648686 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648713 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648725 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648738 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648750 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648764 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648778 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648792 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648809 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648825 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648837 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648849 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648861 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648873 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648890 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648904 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648931 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648947 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648960 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648977 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.648988 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649001 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649013 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649025 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649040 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649054 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649069 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649081 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649095 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649108 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649120 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649134 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649151 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649168 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649181 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649193 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649206 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649218 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649230 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649241 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649254 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649267 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649343 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649364 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649379 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" 
seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649392 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649403 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649423 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649436 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649451 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649465 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 24 00:05:36 crc 
kubenswrapper[4824]: I0224 00:05:36.649481 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649495 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649509 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649542 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649557 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649568 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649581 4824 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649594 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649609 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649619 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649630 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649638 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649648 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649657 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649668 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649680 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649693 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649710 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649730 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649743 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649755 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649767 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649781 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649793 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649809 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" 
seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649819 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649833 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649845 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649857 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649868 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649889 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 
00:05:36.649902 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649924 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649936 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649948 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649961 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649975 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.649987 4824 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650002 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650015 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650025 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650038 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650049 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650063 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650074 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650087 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650099 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650112 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650126 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650138 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650150 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650164 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650176 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650189 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650201 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650213 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650230 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650243 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650255 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650266 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650277 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650288 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650307 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650320 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650334 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650348 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650361 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650373 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" 
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650384 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650400 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650411 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650497 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650557 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650571 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650583 4824 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650593 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650605 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650616 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650631 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650643 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650655 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650673 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650684 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650699 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650713 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650728 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650749 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650762 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650773 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650787 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650798 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650812 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650825 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650842 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.650861 4824 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.652956 4824 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.652982 4824 reconstruct.go:97] "Volume reconstruction finished" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.652990 4824 reconciler.go:26] "Reconciler: start to sync state" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.655927 4824 manager.go:324] Recovery completed Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.665502 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.666892 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.666935 4824 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.666948 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.667965 4824 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.667983 4824 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.668003 4824 state_mem.go:36] "Initialized new in-memory state store" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.689598 4824 policy_none.go:49] "None policy: Start" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.690498 4824 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.691234 4824 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.691272 4824 state_mem.go:35] "Initializing new in-memory state store" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.692412 4824 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.692480 4824 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.692527 4824 kubelet.go:2335] "Starting kubelet main sync loop" Feb 24 00:05:36 crc kubenswrapper[4824]: E0224 00:05:36.692706 4824 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 24 00:05:36 crc kubenswrapper[4824]: W0224 00:05:36.693346 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 00:05:36 crc kubenswrapper[4824]: E0224 00:05:36.693391 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:05:36 crc kubenswrapper[4824]: E0224 00:05:36.722039 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.741558 4824 manager.go:334] "Starting Device Plugin manager" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.741617 4824 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.741633 4824 server.go:79] "Starting device plugin registration server" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.742193 4824 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 24 00:05:36 crc 
kubenswrapper[4824]: I0224 00:05:36.742210 4824 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.742723 4824 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.742840 4824 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.742849 4824 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 24 00:05:36 crc kubenswrapper[4824]: E0224 00:05:36.751655 4824 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.793488 4824 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.793917 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.795429 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.795486 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.795501 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.795715 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.796619 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.796696 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.796713 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.796702 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.796721 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.797093 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.797234 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.797269 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.797968 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.797994 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.798001 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.798021 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.798011 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.798148 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.798398 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.798455 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.798467 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.798478 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.798486 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.798494 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.800270 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.800309 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.800324 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.800373 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.800388 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.800401 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.800902 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.801532 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.801578 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.802186 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.802219 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.802475 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.802548 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.802560 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.802852 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.802871 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.803096 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.805206 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.805282 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.805335 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:36 crc kubenswrapper[4824]: E0224 00:05:36.823052 4824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="400ms" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.842990 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.844225 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.844263 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.844273 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.844300 4824 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 00:05:36 crc kubenswrapper[4824]: E0224 00:05:36.844913 4824 kubelet_node_status.go:99] 
"Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.151:6443: connect: connection refused" node="crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.856014 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.856048 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.856067 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.856084 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.856101 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.856117 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.856132 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.856147 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.856162 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.856178 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 
00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.856222 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.856260 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.856286 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.856329 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.856360 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: 
I0224 00:05:36.957065 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957101 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957127 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957143 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957157 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957170 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957185 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957206 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957220 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957235 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957250 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957267 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957280 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957293 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957306 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957320 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957740 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957800 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957828 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957859 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957867 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957881 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957893 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957903 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957909 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957924 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957931 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957948 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957953 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:36 crc kubenswrapper[4824]: I0224 00:05:36.957970 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.045319 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.047271 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.047457 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.047552 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.047638 4824 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 00:05:37 crc kubenswrapper[4824]: E0224 00:05:37.048225 4824 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.151:6443: connect: connection refused" node="crc" Feb 24 
00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.127020 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.133577 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.148865 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.171838 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 24 00:05:37 crc kubenswrapper[4824]: W0224 00:05:37.173003 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-87ac35aea0d265e4bd6e078ffe3fba67fca63055d00970d942679bd5ceeb8229 WatchSource:0}: Error finding container 87ac35aea0d265e4bd6e078ffe3fba67fca63055d00970d942679bd5ceeb8229: Status 404 returned error can't find the container with id 87ac35aea0d265e4bd6e078ffe3fba67fca63055d00970d942679bd5ceeb8229 Feb 24 00:05:37 crc kubenswrapper[4824]: W0224 00:05:37.174315 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-101334578a872cced90036ea87b59ea7c239365376bb0b884d50dfc8c3e78821 WatchSource:0}: Error finding container 101334578a872cced90036ea87b59ea7c239365376bb0b884d50dfc8c3e78821: Status 404 returned error can't find the container with id 101334578a872cced90036ea87b59ea7c239365376bb0b884d50dfc8c3e78821 Feb 24 00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.175925 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:37 crc kubenswrapper[4824]: W0224 00:05:37.178399 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-855ced46711932315ddb66ea27f3b3c52c001f44f0187aa2812aa58bac0aaeb5 WatchSource:0}: Error finding container 855ced46711932315ddb66ea27f3b3c52c001f44f0187aa2812aa58bac0aaeb5: Status 404 returned error can't find the container with id 855ced46711932315ddb66ea27f3b3c52c001f44f0187aa2812aa58bac0aaeb5 Feb 24 00:05:37 crc kubenswrapper[4824]: W0224 00:05:37.189055 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-bbabcc68576398a5364c21ce47548bedd20424867b6729e5ddf255c8171ab1b3 WatchSource:0}: Error finding container bbabcc68576398a5364c21ce47548bedd20424867b6729e5ddf255c8171ab1b3: Status 404 returned error can't find the container with id bbabcc68576398a5364c21ce47548bedd20424867b6729e5ddf255c8171ab1b3 Feb 24 00:05:37 crc kubenswrapper[4824]: W0224 00:05:37.197480 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a92f116098ea86db8d92f15dd3578c2fe61f9b3c6fcd3ce7716ceed9c2911a11 WatchSource:0}: Error finding container a92f116098ea86db8d92f15dd3578c2fe61f9b3c6fcd3ce7716ceed9c2911a11: Status 404 returned error can't find the container with id a92f116098ea86db8d92f15dd3578c2fe61f9b3c6fcd3ce7716ceed9c2911a11 Feb 24 00:05:37 crc kubenswrapper[4824]: E0224 00:05:37.223980 4824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" 
interval="800ms" Feb 24 00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.448580 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.450786 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.450857 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.450869 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.450901 4824 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 00:05:37 crc kubenswrapper[4824]: E0224 00:05:37.451670 4824 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.151:6443: connect: connection refused" node="crc" Feb 24 00:05:37 crc kubenswrapper[4824]: W0224 00:05:37.571121 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 00:05:37 crc kubenswrapper[4824]: E0224 00:05:37.571246 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.621268 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 07:48:49.420678712 +0000 UTC Feb 24 00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.621668 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.696503 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a92f116098ea86db8d92f15dd3578c2fe61f9b3c6fcd3ce7716ceed9c2911a11"} Feb 24 00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.697724 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bbabcc68576398a5364c21ce47548bedd20424867b6729e5ddf255c8171ab1b3"} Feb 24 00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.699460 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"855ced46711932315ddb66ea27f3b3c52c001f44f0187aa2812aa58bac0aaeb5"} Feb 24 00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.700278 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"101334578a872cced90036ea87b59ea7c239365376bb0b884d50dfc8c3e78821"} Feb 24 00:05:37 crc kubenswrapper[4824]: I0224 00:05:37.701230 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"87ac35aea0d265e4bd6e078ffe3fba67fca63055d00970d942679bd5ceeb8229"} Feb 24 00:05:38 crc kubenswrapper[4824]: W0224 00:05:38.001165 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 00:05:38 crc kubenswrapper[4824]: E0224 00:05:38.001734 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:05:38 crc kubenswrapper[4824]: E0224 00:05:38.025805 4824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="1.6s" Feb 24 00:05:38 crc kubenswrapper[4824]: W0224 00:05:38.049605 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 00:05:38 crc kubenswrapper[4824]: E0224 00:05:38.049696 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:05:38 crc kubenswrapper[4824]: W0224 00:05:38.230292 4824 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 00:05:38 crc kubenswrapper[4824]: E0224 00:05:38.230398 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.252724 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.254242 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.254284 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.254299 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.254362 4824 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 00:05:38 crc kubenswrapper[4824]: E0224 00:05:38.254882 4824 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.151:6443: connect: connection refused" node="crc" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.549812 4824 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 00:05:38 crc kubenswrapper[4824]: E0224 00:05:38.551443 4824 
certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.621094 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.623246 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 01:43:37.45521017 +0000 UTC Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.705934 4824 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a" exitCode=0 Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.706011 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a"} Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.706140 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.707290 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.707331 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.707346 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.709159 4824 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ac9c6e044265812e8e2116a81efcc38023cc8da2b92093e3f75c6efa4e359c4f" exitCode=0 Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.709229 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ac9c6e044265812e8e2116a81efcc38023cc8da2b92093e3f75c6efa4e359c4f"} Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.709265 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.709303 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.710339 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.710369 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.710380 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.710390 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.710430 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.710453 
4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.711281 4824 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="370e38e80ea6eb0a1add2e84d515cef4e8aeb77dc43beba4d0c3fdeb871f4fb3" exitCode=0 Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.711441 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.711742 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"370e38e80ea6eb0a1add2e84d515cef4e8aeb77dc43beba4d0c3fdeb871f4fb3"} Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.713181 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.713214 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.713230 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.716642 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"213ad3b341ace3f4473aa48ecaaa41814fe670417431f6d5cd04be03482e597c"} Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.716680 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4bf86745569172505df6632421bd1587317cb06e26e70937563bd7d0341c6086"} Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.716691 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ef63a3a20052bbda09997002dbbce1fd4cdf577f00711857db86b460ed4e8165"} Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.716702 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"45746da193a3aaa64e64067e2324afb4e4fb1a90a1357461ed97c0bbb5109f5b"} Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.716778 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.718184 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.718243 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.718271 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.719121 4824 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2" exitCode=0 Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.719153 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2"} Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.719287 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.720376 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.720432 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:38 crc kubenswrapper[4824]: I0224 00:05:38.720459 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:39 crc kubenswrapper[4824]: E0224 00:05:39.250005 4824 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.151:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189705f6f4979d2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:05:36.61725009 +0000 UTC m=+0.606874579,LastTimestamp:2026-02-24 00:05:36.61725009 +0000 UTC m=+0.606874579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.621534 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 
00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.624001 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 13:00:08.678846863 +0000 UTC Feb 24 00:05:39 crc kubenswrapper[4824]: E0224 00:05:39.627160 4824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="3.2s" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.727185 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2"} Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.727319 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc"} Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.727337 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb"} Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.727353 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922"} Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.731221 4824 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="d404a30b7d0515ae872dab9ec3e60629de532d6f27ab0c9f1e7e3918bad30e60" exitCode=0 Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.731409 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.731409 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d404a30b7d0515ae872dab9ec3e60629de532d6f27ab0c9f1e7e3918bad30e60"} Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.732417 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.732456 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.732470 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.734054 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.734040 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fdd198d5e4adeaa8e23f5262bc17476b82b1c76b3bfd06b385590eede8c0baa5"} Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.735045 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.735141 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.735170 4824 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.737827 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"52d213e230e92cadfb65d0efd8b4dea058e9f947897ad48c9c9d123ed37263ae"} Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.737894 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.737919 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"357f08b7a73fd26e3a3c8232ff5641ec70da20551310ce9348d3b1462a30735c"} Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.737945 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a93bd599f0dec3d074b172aab55b3b74283b37d4dddc2e6a97b7319e8d7bd15b"} Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.737841 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.739015 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.739045 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.739059 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.739270 4824 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.739309 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.739349 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.855414 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.857261 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.857313 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.857333 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:39 crc kubenswrapper[4824]: I0224 00:05:39.857369 4824 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 00:05:39 crc kubenswrapper[4824]: E0224 00:05:39.857939 4824 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.151:6443: connect: connection refused" node="crc" Feb 24 00:05:40 crc kubenswrapper[4824]: W0224 00:05:40.323066 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 00:05:40 crc kubenswrapper[4824]: E0224 00:05:40.323389 4824 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:05:40 crc kubenswrapper[4824]: W0224 00:05:40.444028 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 00:05:40 crc kubenswrapper[4824]: E0224 00:05:40.444200 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.521130 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.621106 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.624399 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 08:22:03.243102605 +0000 UTC Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.742008 4824 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.747777 4824 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e58d6fa1f448ca33cfbeb0873a4f6698f83f676348dda39a279ad793fce7ced3" exitCode=255 Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.747888 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e58d6fa1f448ca33cfbeb0873a4f6698f83f676348dda39a279ad793fce7ced3"} Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.747921 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.749325 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.749383 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.749404 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.750136 4824 scope.go:117] "RemoveContainer" containerID="e58d6fa1f448ca33cfbeb0873a4f6698f83f676348dda39a279ad793fce7ced3" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.751486 4824 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="513b48315443b87dc828225545ae02e1f47580596fc31a7935dd59b80065d26c" exitCode=0 Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.751586 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"513b48315443b87dc828225545ae02e1f47580596fc31a7935dd59b80065d26c"} Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.751662 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.751734 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.751776 4824 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.751856 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.751786 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.752622 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.752668 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.752685 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.753409 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.753452 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.753425 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:40 crc 
kubenswrapper[4824]: I0224 00:05:40.753470 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.753543 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.753558 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.753571 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.753588 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:40 crc kubenswrapper[4824]: I0224 00:05:40.753604 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:41 crc kubenswrapper[4824]: I0224 00:05:41.376744 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:41 crc kubenswrapper[4824]: I0224 00:05:41.624746 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:42:33.450466584 +0000 UTC Feb 24 00:05:41 crc kubenswrapper[4824]: I0224 00:05:41.759427 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a5ce8e051c69710768e7b4fba06cc91026ce4d25baf35cd8f9235bfd348a4451"} Feb 24 00:05:41 crc kubenswrapper[4824]: I0224 00:05:41.759499 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7244717b123b325ac83f68b976c1e5761a76b1aacac6aab471d0b644386251d6"} Feb 24 00:05:41 crc kubenswrapper[4824]: I0224 00:05:41.759554 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"65bbd80285850cb217ec0994ab8841efb660fd2488077ee4968b0c1f5d156fbe"} Feb 24 00:05:41 crc kubenswrapper[4824]: I0224 00:05:41.759578 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"50ede0889b30698004a8f288b9cc1bb7d00b194c21ff1936b2743f1e7f246e6f"} Feb 24 00:05:41 crc kubenswrapper[4824]: I0224 00:05:41.761586 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 24 00:05:41 crc kubenswrapper[4824]: I0224 00:05:41.763581 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"becc3a99f880114ef1a12f111c21365920ead17ffb3ba93936683ea411b50486"} Feb 24 00:05:41 crc kubenswrapper[4824]: I0224 00:05:41.763749 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:41 crc kubenswrapper[4824]: I0224 00:05:41.763810 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:41 crc kubenswrapper[4824]: I0224 00:05:41.764712 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:41 crc kubenswrapper[4824]: I0224 00:05:41.764741 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 
00:05:41 crc kubenswrapper[4824]: I0224 00:05:41.764752 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:42 crc kubenswrapper[4824]: I0224 00:05:42.625561 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 23:13:58.191883148 +0000 UTC Feb 24 00:05:42 crc kubenswrapper[4824]: I0224 00:05:42.733621 4824 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 00:05:42 crc kubenswrapper[4824]: I0224 00:05:42.777173 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:42 crc kubenswrapper[4824]: I0224 00:05:42.777198 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"837b5de339793bc363199eae6f141038bf069b2832f6375a8e01d49bef7ea63d"} Feb 24 00:05:42 crc kubenswrapper[4824]: I0224 00:05:42.777277 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:42 crc kubenswrapper[4824]: I0224 00:05:42.777179 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:42 crc kubenswrapper[4824]: I0224 00:05:42.778459 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:42 crc kubenswrapper[4824]: I0224 00:05:42.778502 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:42 crc kubenswrapper[4824]: I0224 00:05:42.778532 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:42 crc kubenswrapper[4824]: I0224 00:05:42.778616 4824 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:42 crc kubenswrapper[4824]: I0224 00:05:42.778678 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:42 crc kubenswrapper[4824]: I0224 00:05:42.778728 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:43 crc kubenswrapper[4824]: I0224 00:05:43.058313 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:43 crc kubenswrapper[4824]: I0224 00:05:43.060263 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:43 crc kubenswrapper[4824]: I0224 00:05:43.060321 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:43 crc kubenswrapper[4824]: I0224 00:05:43.060335 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:43 crc kubenswrapper[4824]: I0224 00:05:43.060373 4824 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 00:05:43 crc kubenswrapper[4824]: I0224 00:05:43.521616 4824 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 00:05:43 crc kubenswrapper[4824]: I0224 00:05:43.521823 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Feb 24 00:05:43 crc kubenswrapper[4824]: I0224 00:05:43.626013 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 20:35:33.602937106 +0000 UTC Feb 24 00:05:43 crc kubenswrapper[4824]: I0224 00:05:43.779857 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:43 crc kubenswrapper[4824]: I0224 00:05:43.779927 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:43 crc kubenswrapper[4824]: I0224 00:05:43.781336 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:43 crc kubenswrapper[4824]: I0224 00:05:43.781373 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:43 crc kubenswrapper[4824]: I0224 00:05:43.781384 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:43 crc kubenswrapper[4824]: I0224 00:05:43.781467 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:43 crc kubenswrapper[4824]: I0224 00:05:43.781577 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:43 crc kubenswrapper[4824]: I0224 00:05:43.781609 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:44 crc kubenswrapper[4824]: I0224 00:05:44.171240 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:44 crc kubenswrapper[4824]: I0224 00:05:44.626677 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 
2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 06:43:10.373928468 +0000 UTC Feb 24 00:05:44 crc kubenswrapper[4824]: I0224 00:05:44.680081 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:44 crc kubenswrapper[4824]: I0224 00:05:44.680343 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:44 crc kubenswrapper[4824]: I0224 00:05:44.682035 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:44 crc kubenswrapper[4824]: I0224 00:05:44.682077 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:44 crc kubenswrapper[4824]: I0224 00:05:44.682090 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:44 crc kubenswrapper[4824]: I0224 00:05:44.778828 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 24 00:05:44 crc kubenswrapper[4824]: I0224 00:05:44.782634 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:44 crc kubenswrapper[4824]: I0224 00:05:44.782645 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:44 crc kubenswrapper[4824]: I0224 00:05:44.784464 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:44 crc kubenswrapper[4824]: I0224 00:05:44.784462 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:44 crc kubenswrapper[4824]: I0224 00:05:44.784507 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:44 
crc kubenswrapper[4824]: I0224 00:05:44.784558 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:44 crc kubenswrapper[4824]: I0224 00:05:44.784570 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:44 crc kubenswrapper[4824]: I0224 00:05:44.784580 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:45 crc kubenswrapper[4824]: I0224 00:05:45.015800 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 24 00:05:45 crc kubenswrapper[4824]: I0224 00:05:45.035135 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:05:45 crc kubenswrapper[4824]: I0224 00:05:45.035431 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:45 crc kubenswrapper[4824]: I0224 00:05:45.037200 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:45 crc kubenswrapper[4824]: I0224 00:05:45.037264 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:45 crc kubenswrapper[4824]: I0224 00:05:45.037284 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:45 crc kubenswrapper[4824]: I0224 00:05:45.627903 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:18:19.614453227 +0000 UTC Feb 24 00:05:45 crc kubenswrapper[4824]: I0224 00:05:45.786363 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:45 crc kubenswrapper[4824]: I0224 
00:05:45.787323 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:45 crc kubenswrapper[4824]: I0224 00:05:45.787353 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:45 crc kubenswrapper[4824]: I0224 00:05:45.787362 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:46 crc kubenswrapper[4824]: I0224 00:05:46.533555 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:46 crc kubenswrapper[4824]: I0224 00:05:46.533778 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:46 crc kubenswrapper[4824]: I0224 00:05:46.534964 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:46 crc kubenswrapper[4824]: I0224 00:05:46.534990 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:46 crc kubenswrapper[4824]: I0224 00:05:46.535000 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:46 crc kubenswrapper[4824]: I0224 00:05:46.628405 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 00:15:00.714139094 +0000 UTC Feb 24 00:05:46 crc kubenswrapper[4824]: E0224 00:05:46.751875 4824 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 00:05:47 crc kubenswrapper[4824]: I0224 00:05:47.629331 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2025-11-24 01:41:07.822292395 +0000 UTC Feb 24 00:05:48 crc kubenswrapper[4824]: I0224 00:05:48.031057 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:48 crc kubenswrapper[4824]: I0224 00:05:48.031315 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:48 crc kubenswrapper[4824]: I0224 00:05:48.033120 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:48 crc kubenswrapper[4824]: I0224 00:05:48.033182 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:48 crc kubenswrapper[4824]: I0224 00:05:48.033198 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:48 crc kubenswrapper[4824]: I0224 00:05:48.037983 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:48 crc kubenswrapper[4824]: I0224 00:05:48.630112 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 11:27:46.974315946 +0000 UTC Feb 24 00:05:48 crc kubenswrapper[4824]: I0224 00:05:48.794084 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:48 crc kubenswrapper[4824]: I0224 00:05:48.795348 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:48 crc kubenswrapper[4824]: I0224 00:05:48.795394 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:48 crc kubenswrapper[4824]: I0224 00:05:48.795410 4824 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:48 crc kubenswrapper[4824]: I0224 00:05:48.798803 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:49 crc kubenswrapper[4824]: I0224 00:05:49.631082 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 18:34:14.553860028 +0000 UTC Feb 24 00:05:49 crc kubenswrapper[4824]: I0224 00:05:49.796446 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:49 crc kubenswrapper[4824]: I0224 00:05:49.798119 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:49 crc kubenswrapper[4824]: I0224 00:05:49.798162 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:49 crc kubenswrapper[4824]: I0224 00:05:49.798171 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:50 crc kubenswrapper[4824]: I0224 00:05:50.357437 4824 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 24 00:05:50 crc kubenswrapper[4824]: I0224 00:05:50.357584 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 24 00:05:50 crc kubenswrapper[4824]: I0224 
00:05:50.632076 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 08:24:13.826914982 +0000 UTC Feb 24 00:05:50 crc kubenswrapper[4824]: W0224 00:05:50.728143 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 24 00:05:50 crc kubenswrapper[4824]: I0224 00:05:50.728544 4824 trace.go:236] Trace[1574236571]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Feb-2026 00:05:40.727) (total time: 10001ms): Feb 24 00:05:50 crc kubenswrapper[4824]: Trace[1574236571]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (00:05:50.728) Feb 24 00:05:50 crc kubenswrapper[4824]: Trace[1574236571]: [10.00126959s] [10.00126959s] END Feb 24 00:05:50 crc kubenswrapper[4824]: E0224 00:05:50.728691 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 24 00:05:50 crc kubenswrapper[4824]: W0224 00:05:50.841466 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 24 00:05:50 crc kubenswrapper[4824]: I0224 00:05:50.841851 4824 trace.go:236] Trace[179095445]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Feb-2026 00:05:40.839) (total time: 
10001ms): Feb 24 00:05:50 crc kubenswrapper[4824]: Trace[179095445]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:05:50.841) Feb 24 00:05:50 crc kubenswrapper[4824]: Trace[179095445]: [10.001958436s] [10.001958436s] END Feb 24 00:05:50 crc kubenswrapper[4824]: E0224 00:05:50.842000 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 24 00:05:51 crc kubenswrapper[4824]: E0224 00:05:51.338860 4824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:51Z is after 2026-02-23T05:33:13Z" interval="6.4s" Feb 24 00:05:51 crc kubenswrapper[4824]: E0224 00:05:51.340854 4824 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:51Z is after 2026-02-23T05:33:13Z" node="crc" Feb 24 00:05:51 crc kubenswrapper[4824]: W0224 00:05:51.344332 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:51Z is after 2026-02-23T05:33:13Z Feb 24 00:05:51 crc kubenswrapper[4824]: E0224 00:05:51.344502 4824 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:51Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:05:51 crc kubenswrapper[4824]: W0224 00:05:51.345803 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:51Z is after 2026-02-23T05:33:13Z Feb 24 00:05:51 crc kubenswrapper[4824]: E0224 00:05:51.345855 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:51Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:05:51 crc kubenswrapper[4824]: E0224 00:05:51.354341 4824 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:51Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:05:51 crc kubenswrapper[4824]: E0224 00:05:51.354923 4824 event.go:368] "Unable to write event (may 
retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:51Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189705f6f4979d2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:05:36.61725009 +0000 UTC m=+0.606874579,LastTimestamp:2026-02-24 00:05:36.61725009 +0000 UTC m=+0.606874579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:05:51 crc kubenswrapper[4824]: I0224 00:05:51.356008 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:51Z is after 2026-02-23T05:33:13Z Feb 24 00:05:51 crc kubenswrapper[4824]: I0224 00:05:51.359404 4824 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Feb 24 00:05:51 crc kubenswrapper[4824]: I0224 00:05:51.359692 4824 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 24 00:05:51 crc kubenswrapper[4824]: I0224 00:05:51.363431 4824 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Feb 24 00:05:51 crc kubenswrapper[4824]: I0224 00:05:51.363560 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 24 00:05:51 crc kubenswrapper[4824]: I0224 00:05:51.394170 4824 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]log ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]etcd ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 24 00:05:51 crc kubenswrapper[4824]: 
[+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/generic-apiserver-start-informers ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/priority-and-fairness-filter ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/start-apiextensions-informers ok Feb 24 00:05:51 crc kubenswrapper[4824]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Feb 24 00:05:51 crc kubenswrapper[4824]: [-]poststarthook/crd-informer-synced failed: reason withheld Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/start-system-namespaces-controller ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 24 00:05:51 crc kubenswrapper[4824]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 24 00:05:51 crc kubenswrapper[4824]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 24 00:05:51 crc kubenswrapper[4824]: [-]poststarthook/priority-and-fairness-config-producer failed: reason withheld Feb 24 00:05:51 crc kubenswrapper[4824]: [-]poststarthook/bootstrap-controller failed: reason withheld Feb 24 00:05:51 crc kubenswrapper[4824]: 
[+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/start-kube-aggregator-informers ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 24 00:05:51 crc kubenswrapper[4824]: [-]poststarthook/apiservice-registration-controller failed: reason withheld Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 24 00:05:51 crc kubenswrapper[4824]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]autoregister-completion ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/apiservice-openapi-controller ok Feb 24 00:05:51 crc kubenswrapper[4824]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 24 00:05:51 crc kubenswrapper[4824]: livez check failed Feb 24 00:05:51 crc kubenswrapper[4824]: I0224 00:05:51.394259 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:05:51 crc kubenswrapper[4824]: I0224 00:05:51.629233 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:51Z is after 2026-02-23T05:33:13Z Feb 24 00:05:51 crc kubenswrapper[4824]: I0224 00:05:51.634020 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2025-11-27 14:53:19.560532553 +0000 UTC Feb 24 00:05:51 crc kubenswrapper[4824]: I0224 00:05:51.803077 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 24 00:05:51 crc kubenswrapper[4824]: I0224 00:05:51.803512 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 24 00:05:51 crc kubenswrapper[4824]: I0224 00:05:51.805179 4824 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="becc3a99f880114ef1a12f111c21365920ead17ffb3ba93936683ea411b50486" exitCode=255 Feb 24 00:05:51 crc kubenswrapper[4824]: I0224 00:05:51.805223 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"becc3a99f880114ef1a12f111c21365920ead17ffb3ba93936683ea411b50486"} Feb 24 00:05:51 crc kubenswrapper[4824]: I0224 00:05:51.805272 4824 scope.go:117] "RemoveContainer" containerID="e58d6fa1f448ca33cfbeb0873a4f6698f83f676348dda39a279ad793fce7ced3" Feb 24 00:05:51 crc kubenswrapper[4824]: I0224 00:05:51.805414 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:51 crc kubenswrapper[4824]: I0224 00:05:51.806573 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:51 crc kubenswrapper[4824]: I0224 00:05:51.806598 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:51 crc kubenswrapper[4824]: I0224 00:05:51.806609 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:51 crc 
kubenswrapper[4824]: I0224 00:05:51.807077 4824 scope.go:117] "RemoveContainer" containerID="becc3a99f880114ef1a12f111c21365920ead17ffb3ba93936683ea411b50486" Feb 24 00:05:51 crc kubenswrapper[4824]: E0224 00:05:51.807320 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 00:05:52 crc kubenswrapper[4824]: I0224 00:05:52.623355 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:52Z is after 2026-02-23T05:33:13Z Feb 24 00:05:52 crc kubenswrapper[4824]: I0224 00:05:52.634634 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 01:26:58.995006326 +0000 UTC Feb 24 00:05:52 crc kubenswrapper[4824]: I0224 00:05:52.810118 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 24 00:05:53 crc kubenswrapper[4824]: I0224 00:05:53.523076 4824 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 00:05:53 crc kubenswrapper[4824]: I0224 00:05:53.523210 4824 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 00:05:53 crc kubenswrapper[4824]: I0224 00:05:53.626977 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:53Z is after 2026-02-23T05:33:13Z Feb 24 00:05:53 crc kubenswrapper[4824]: I0224 00:05:53.635226 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 09:27:15.479684268 +0000 UTC Feb 24 00:05:54 crc kubenswrapper[4824]: I0224 00:05:54.624745 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:54Z is after 2026-02-23T05:33:13Z Feb 24 00:05:54 crc kubenswrapper[4824]: I0224 00:05:54.635969 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 01:50:11.856617984 +0000 UTC Feb 24 00:05:55 crc kubenswrapper[4824]: I0224 00:05:55.048094 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 24 00:05:55 crc kubenswrapper[4824]: I0224 00:05:55.048348 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Feb 24 00:05:55 crc kubenswrapper[4824]: I0224 00:05:55.049741 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:55 crc kubenswrapper[4824]: I0224 00:05:55.049792 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:55 crc kubenswrapper[4824]: I0224 00:05:55.049806 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:55 crc kubenswrapper[4824]: I0224 00:05:55.061893 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 24 00:05:55 crc kubenswrapper[4824]: W0224 00:05:55.347720 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:55Z is after 2026-02-23T05:33:13Z Feb 24 00:05:55 crc kubenswrapper[4824]: E0224 00:05:55.347860 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:55Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:05:55 crc kubenswrapper[4824]: I0224 00:05:55.623232 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:55Z is after 
2026-02-23T05:33:13Z Feb 24 00:05:55 crc kubenswrapper[4824]: I0224 00:05:55.636576 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 16:40:10.504136442 +0000 UTC Feb 24 00:05:55 crc kubenswrapper[4824]: W0224 00:05:55.727108 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:55Z is after 2026-02-23T05:33:13Z Feb 24 00:05:55 crc kubenswrapper[4824]: E0224 00:05:55.727249 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:55Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:05:55 crc kubenswrapper[4824]: I0224 00:05:55.820119 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:55 crc kubenswrapper[4824]: I0224 00:05:55.821347 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:55 crc kubenswrapper[4824]: I0224 00:05:55.821425 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:55 crc kubenswrapper[4824]: I0224 00:05:55.821450 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:56 crc kubenswrapper[4824]: I0224 00:05:56.383419 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:56 crc kubenswrapper[4824]: I0224 00:05:56.383744 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:56 crc kubenswrapper[4824]: I0224 00:05:56.385620 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:56 crc kubenswrapper[4824]: I0224 00:05:56.385667 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:56 crc kubenswrapper[4824]: I0224 00:05:56.385679 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:56 crc kubenswrapper[4824]: I0224 00:05:56.386372 4824 scope.go:117] "RemoveContainer" containerID="becc3a99f880114ef1a12f111c21365920ead17ffb3ba93936683ea411b50486" Feb 24 00:05:56 crc kubenswrapper[4824]: E0224 00:05:56.386616 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 00:05:56 crc kubenswrapper[4824]: I0224 00:05:56.392451 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:56 crc kubenswrapper[4824]: I0224 00:05:56.624962 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:56Z is after 2026-02-23T05:33:13Z Feb 24 00:05:56 crc 
kubenswrapper[4824]: I0224 00:05:56.637326 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 03:30:41.489689839 +0000 UTC Feb 24 00:05:56 crc kubenswrapper[4824]: E0224 00:05:56.752171 4824 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 00:05:56 crc kubenswrapper[4824]: I0224 00:05:56.822096 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:56 crc kubenswrapper[4824]: I0224 00:05:56.824128 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:56 crc kubenswrapper[4824]: I0224 00:05:56.824193 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:56 crc kubenswrapper[4824]: I0224 00:05:56.824214 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:56 crc kubenswrapper[4824]: I0224 00:05:56.825071 4824 scope.go:117] "RemoveContainer" containerID="becc3a99f880114ef1a12f111c21365920ead17ffb3ba93936683ea411b50486" Feb 24 00:05:56 crc kubenswrapper[4824]: E0224 00:05:56.825379 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 00:05:57 crc kubenswrapper[4824]: I0224 00:05:57.571887 4824 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:57 crc kubenswrapper[4824]: I0224 00:05:57.623288 
4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:57Z is after 2026-02-23T05:33:13Z Feb 24 00:05:57 crc kubenswrapper[4824]: I0224 00:05:57.637728 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 01:33:16.178920795 +0000 UTC Feb 24 00:05:57 crc kubenswrapper[4824]: I0224 00:05:57.740959 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:57 crc kubenswrapper[4824]: I0224 00:05:57.742926 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:57 crc kubenswrapper[4824]: I0224 00:05:57.743000 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:57 crc kubenswrapper[4824]: I0224 00:05:57.743028 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:57 crc kubenswrapper[4824]: I0224 00:05:57.743112 4824 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 00:05:57 crc kubenswrapper[4824]: E0224 00:05:57.747045 4824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:57Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 24 00:05:57 crc kubenswrapper[4824]: E0224 00:05:57.750392 4824 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:57Z is after 2026-02-23T05:33:13Z" node="crc" Feb 24 00:05:57 crc kubenswrapper[4824]: I0224 00:05:57.825655 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:57 crc kubenswrapper[4824]: I0224 00:05:57.827129 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:57 crc kubenswrapper[4824]: I0224 00:05:57.827198 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:57 crc kubenswrapper[4824]: I0224 00:05:57.827214 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:57 crc kubenswrapper[4824]: I0224 00:05:57.828058 4824 scope.go:117] "RemoveContainer" containerID="becc3a99f880114ef1a12f111c21365920ead17ffb3ba93936683ea411b50486" Feb 24 00:05:57 crc kubenswrapper[4824]: E0224 00:05:57.828296 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 00:05:58 crc kubenswrapper[4824]: I0224 00:05:58.626434 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:58Z is after 2026-02-23T05:33:13Z Feb 24 00:05:58 crc kubenswrapper[4824]: I0224 00:05:58.638903 
4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:23:59.352083662 +0000 UTC Feb 24 00:05:59 crc kubenswrapper[4824]: W0224 00:05:59.302545 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:59Z is after 2026-02-23T05:33:13Z Feb 24 00:05:59 crc kubenswrapper[4824]: E0224 00:05:59.302652 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:59Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:05:59 crc kubenswrapper[4824]: I0224 00:05:59.624579 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:59Z is after 2026-02-23T05:33:13Z Feb 24 00:05:59 crc kubenswrapper[4824]: I0224 00:05:59.639877 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 13:32:26.675301132 +0000 UTC Feb 24 00:06:00 crc kubenswrapper[4824]: I0224 00:06:00.007955 4824 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 00:06:00 crc kubenswrapper[4824]: E0224 00:06:00.012221 4824 certificate_manager.go:562] 
"Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:00Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:06:00 crc kubenswrapper[4824]: I0224 00:06:00.626239 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:00Z is after 2026-02-23T05:33:13Z Feb 24 00:06:00 crc kubenswrapper[4824]: I0224 00:06:00.641014 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 19:16:09.556710406 +0000 UTC Feb 24 00:06:01 crc kubenswrapper[4824]: E0224 00:06:01.360848 4824 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:01Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189705f6f4979d2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:05:36.61725009 +0000 UTC m=+0.606874579,LastTimestamp:2026-02-24 00:05:36.61725009 +0000 UTC m=+0.606874579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:06:01 crc kubenswrapper[4824]: I0224 00:06:01.626327 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:01Z is after 2026-02-23T05:33:13Z Feb 24 00:06:01 crc kubenswrapper[4824]: I0224 00:06:01.641802 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:37:42.838045738 +0000 UTC Feb 24 00:06:01 crc kubenswrapper[4824]: W0224 00:06:01.754690 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:01Z is after 2026-02-23T05:33:13Z Feb 24 00:06:01 crc kubenswrapper[4824]: E0224 00:06:01.754810 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:01Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:06:02 crc kubenswrapper[4824]: I0224 00:06:02.625025 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:02Z 
is after 2026-02-23T05:33:13Z Feb 24 00:06:02 crc kubenswrapper[4824]: I0224 00:06:02.642439 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:04:33.143647884 +0000 UTC Feb 24 00:06:03 crc kubenswrapper[4824]: I0224 00:06:03.521364 4824 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 00:06:03 crc kubenswrapper[4824]: I0224 00:06:03.521552 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 00:06:03 crc kubenswrapper[4824]: I0224 00:06:03.521654 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:06:03 crc kubenswrapper[4824]: I0224 00:06:03.521878 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:03 crc kubenswrapper[4824]: I0224 00:06:03.523569 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:03 crc kubenswrapper[4824]: I0224 00:06:03.523637 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:03 crc kubenswrapper[4824]: I0224 00:06:03.523656 4824 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 24 00:06:03 crc kubenswrapper[4824]: I0224 00:06:03.524497 4824 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"ef63a3a20052bbda09997002dbbce1fd4cdf577f00711857db86b460ed4e8165"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 24 00:06:03 crc kubenswrapper[4824]: I0224 00:06:03.524845 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://ef63a3a20052bbda09997002dbbce1fd4cdf577f00711857db86b460ed4e8165" gracePeriod=30 Feb 24 00:06:03 crc kubenswrapper[4824]: I0224 00:06:03.625289 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:03Z is after 2026-02-23T05:33:13Z Feb 24 00:06:03 crc kubenswrapper[4824]: I0224 00:06:03.642657 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 02:35:56.839771428 +0000 UTC Feb 24 00:06:03 crc kubenswrapper[4824]: I0224 00:06:03.846855 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 24 00:06:03 crc kubenswrapper[4824]: I0224 00:06:03.847356 4824 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" 
containerID="ef63a3a20052bbda09997002dbbce1fd4cdf577f00711857db86b460ed4e8165" exitCode=255 Feb 24 00:06:03 crc kubenswrapper[4824]: I0224 00:06:03.847415 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"ef63a3a20052bbda09997002dbbce1fd4cdf577f00711857db86b460ed4e8165"} Feb 24 00:06:04 crc kubenswrapper[4824]: W0224 00:06:04.023002 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:04Z is after 2026-02-23T05:33:13Z Feb 24 00:06:04 crc kubenswrapper[4824]: E0224 00:06:04.023541 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:06:04 crc kubenswrapper[4824]: I0224 00:06:04.624759 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:04Z is after 2026-02-23T05:33:13Z Feb 24 00:06:04 crc kubenswrapper[4824]: I0224 00:06:04.643283 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 23:50:53.34762439 +0000 
UTC Feb 24 00:06:04 crc kubenswrapper[4824]: I0224 00:06:04.751056 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:04 crc kubenswrapper[4824]: E0224 00:06:04.752703 4824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:04Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 24 00:06:04 crc kubenswrapper[4824]: I0224 00:06:04.753461 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:04 crc kubenswrapper[4824]: I0224 00:06:04.753548 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:04 crc kubenswrapper[4824]: I0224 00:06:04.753572 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:04 crc kubenswrapper[4824]: I0224 00:06:04.753618 4824 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 00:06:04 crc kubenswrapper[4824]: E0224 00:06:04.757989 4824 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:04Z is after 2026-02-23T05:33:13Z" node="crc" Feb 24 00:06:04 crc kubenswrapper[4824]: I0224 00:06:04.854349 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 24 00:06:04 crc kubenswrapper[4824]: I0224 00:06:04.854928 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8fe1cb1e114c1440630a33e3dd127af3600c520e37148cc01ab2e4ed346941ba"} Feb 24 00:06:04 crc kubenswrapper[4824]: I0224 00:06:04.855144 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:04 crc kubenswrapper[4824]: I0224 00:06:04.856576 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:04 crc kubenswrapper[4824]: I0224 00:06:04.856622 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:04 crc kubenswrapper[4824]: I0224 00:06:04.856635 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:05 crc kubenswrapper[4824]: I0224 00:06:05.625643 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:05Z is after 2026-02-23T05:33:13Z Feb 24 00:06:05 crc kubenswrapper[4824]: I0224 00:06:05.644207 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 02:42:58.458552667 +0000 UTC Feb 24 00:06:05 crc kubenswrapper[4824]: I0224 00:06:05.856880 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:05 crc kubenswrapper[4824]: I0224 00:06:05.857804 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:05 crc kubenswrapper[4824]: I0224 00:06:05.857832 4824 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:05 crc kubenswrapper[4824]: I0224 00:06:05.857843 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:06 crc kubenswrapper[4824]: I0224 00:06:06.626138 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:06Z is after 2026-02-23T05:33:13Z Feb 24 00:06:06 crc kubenswrapper[4824]: I0224 00:06:06.645242 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 00:19:48.502240005 +0000 UTC Feb 24 00:06:06 crc kubenswrapper[4824]: E0224 00:06:06.752285 4824 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 00:06:07 crc kubenswrapper[4824]: W0224 00:06:07.058133 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:07Z is after 2026-02-23T05:33:13Z Feb 24 00:06:07 crc kubenswrapper[4824]: E0224 00:06:07.058225 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:07Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:06:07 crc 
kubenswrapper[4824]: I0224 00:06:07.623910 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:07Z is after 2026-02-23T05:33:13Z Feb 24 00:06:07 crc kubenswrapper[4824]: I0224 00:06:07.646385 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 05:05:12.383249084 +0000 UTC Feb 24 00:06:08 crc kubenswrapper[4824]: I0224 00:06:08.624169 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:08Z is after 2026-02-23T05:33:13Z Feb 24 00:06:08 crc kubenswrapper[4824]: I0224 00:06:08.647481 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 20:13:44.520690878 +0000 UTC Feb 24 00:06:09 crc kubenswrapper[4824]: I0224 00:06:09.626678 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:09Z is after 2026-02-23T05:33:13Z Feb 24 00:06:09 crc kubenswrapper[4824]: I0224 00:06:09.648099 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 05:21:10.623568405 +0000 UTC Feb 24 00:06:10 crc kubenswrapper[4824]: I0224 00:06:10.522383 4824 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:06:10 crc kubenswrapper[4824]: I0224 00:06:10.522689 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:10 crc kubenswrapper[4824]: I0224 00:06:10.525026 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:10 crc kubenswrapper[4824]: I0224 00:06:10.525290 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:10 crc kubenswrapper[4824]: I0224 00:06:10.525457 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:10 crc kubenswrapper[4824]: I0224 00:06:10.624573 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:10Z is after 2026-02-23T05:33:13Z Feb 24 00:06:10 crc kubenswrapper[4824]: I0224 00:06:10.648720 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 01:04:19.766824538 +0000 UTC Feb 24 00:06:10 crc kubenswrapper[4824]: I0224 00:06:10.693592 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:10 crc kubenswrapper[4824]: I0224 00:06:10.695388 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:10 crc kubenswrapper[4824]: I0224 00:06:10.695577 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:10 crc kubenswrapper[4824]: 
I0224 00:06:10.695596 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:10 crc kubenswrapper[4824]: I0224 00:06:10.696414 4824 scope.go:117] "RemoveContainer" containerID="becc3a99f880114ef1a12f111c21365920ead17ffb3ba93936683ea411b50486" Feb 24 00:06:11 crc kubenswrapper[4824]: E0224 00:06:11.366756 4824 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:11Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189705f6f4979d2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:05:36.61725009 +0000 UTC m=+0.606874579,LastTimestamp:2026-02-24 00:05:36.61725009 +0000 UTC m=+0.606874579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:06:11 crc kubenswrapper[4824]: I0224 00:06:11.623856 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:11Z is after 2026-02-23T05:33:13Z Feb 24 00:06:11 crc kubenswrapper[4824]: I0224 00:06:11.649303 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:54:27.099766513 +0000 UTC Feb 24 00:06:11 crc kubenswrapper[4824]: E0224 00:06:11.757559 4824 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:11Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 24 00:06:11 crc kubenswrapper[4824]: I0224 00:06:11.758368 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:11 crc kubenswrapper[4824]: I0224 00:06:11.759418 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:11 crc kubenswrapper[4824]: I0224 00:06:11.759461 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:11 crc kubenswrapper[4824]: I0224 00:06:11.759473 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:11 crc kubenswrapper[4824]: I0224 00:06:11.759504 4824 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 00:06:11 crc kubenswrapper[4824]: E0224 00:06:11.762802 4824 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:11Z is after 2026-02-23T05:33:13Z" node="crc" Feb 24 00:06:11 crc kubenswrapper[4824]: I0224 00:06:11.875090 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 00:06:11 crc kubenswrapper[4824]: I0224 00:06:11.875820 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 
24 00:06:11 crc kubenswrapper[4824]: I0224 00:06:11.877815 4824 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1f1c1f049b88250d60dc213ff2c7023d6f8204c87e8e78cff43d73af450d7e46" exitCode=255 Feb 24 00:06:11 crc kubenswrapper[4824]: I0224 00:06:11.877900 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1f1c1f049b88250d60dc213ff2c7023d6f8204c87e8e78cff43d73af450d7e46"} Feb 24 00:06:11 crc kubenswrapper[4824]: I0224 00:06:11.877969 4824 scope.go:117] "RemoveContainer" containerID="becc3a99f880114ef1a12f111c21365920ead17ffb3ba93936683ea411b50486" Feb 24 00:06:11 crc kubenswrapper[4824]: I0224 00:06:11.878253 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:11 crc kubenswrapper[4824]: I0224 00:06:11.879362 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:11 crc kubenswrapper[4824]: I0224 00:06:11.879414 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:11 crc kubenswrapper[4824]: I0224 00:06:11.879430 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:11 crc kubenswrapper[4824]: I0224 00:06:11.880275 4824 scope.go:117] "RemoveContainer" containerID="1f1c1f049b88250d60dc213ff2c7023d6f8204c87e8e78cff43d73af450d7e46" Feb 24 00:06:11 crc kubenswrapper[4824]: E0224 00:06:11.881344 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 00:06:12 crc kubenswrapper[4824]: I0224 00:06:12.625448 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 2026-02-23T05:33:13Z Feb 24 00:06:12 crc kubenswrapper[4824]: I0224 00:06:12.649898 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 19:54:55.494167556 +0000 UTC Feb 24 00:06:12 crc kubenswrapper[4824]: I0224 00:06:12.884200 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 00:06:13 crc kubenswrapper[4824]: I0224 00:06:13.522592 4824 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 00:06:13 crc kubenswrapper[4824]: I0224 00:06:13.522742 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 00:06:13 crc kubenswrapper[4824]: I0224 00:06:13.625733 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:13Z is after 2026-02-23T05:33:13Z Feb 24 00:06:13 crc kubenswrapper[4824]: I0224 00:06:13.649993 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 20:16:53.33940236 +0000 UTC Feb 24 00:06:14 crc kubenswrapper[4824]: W0224 00:06:14.603328 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:14Z is after 2026-02-23T05:33:13Z Feb 24 00:06:14 crc kubenswrapper[4824]: E0224 00:06:14.603448 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:14Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:06:14 crc kubenswrapper[4824]: I0224 00:06:14.624072 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:14Z is after 2026-02-23T05:33:13Z Feb 24 00:06:14 crc kubenswrapper[4824]: I0224 00:06:14.650337 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2025-12-12 12:49:39.919722023 +0000 UTC Feb 24 00:06:14 crc kubenswrapper[4824]: I0224 00:06:14.680822 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:06:14 crc kubenswrapper[4824]: I0224 00:06:14.681122 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:14 crc kubenswrapper[4824]: I0224 00:06:14.682942 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:14 crc kubenswrapper[4824]: I0224 00:06:14.683000 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:14 crc kubenswrapper[4824]: I0224 00:06:14.683009 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:15 crc kubenswrapper[4824]: I0224 00:06:15.625946 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:15Z is after 2026-02-23T05:33:13Z Feb 24 00:06:15 crc kubenswrapper[4824]: I0224 00:06:15.651274 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 01:48:09.262294545 +0000 UTC Feb 24 00:06:16 crc kubenswrapper[4824]: I0224 00:06:16.625769 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:16Z is after 2026-02-23T05:33:13Z Feb 24 00:06:16 crc 
kubenswrapper[4824]: I0224 00:06:16.652114 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 19:07:43.728990135 +0000 UTC Feb 24 00:06:16 crc kubenswrapper[4824]: E0224 00:06:16.752635 4824 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 00:06:16 crc kubenswrapper[4824]: I0224 00:06:16.927775 4824 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 00:06:16 crc kubenswrapper[4824]: E0224 00:06:16.934427 4824 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:16Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:06:16 crc kubenswrapper[4824]: E0224 00:06:16.935722 4824 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Feb 24 00:06:17 crc kubenswrapper[4824]: I0224 00:06:17.571427 4824 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:06:17 crc kubenswrapper[4824]: I0224 00:06:17.571650 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:17 crc kubenswrapper[4824]: I0224 00:06:17.573174 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:17 crc kubenswrapper[4824]: I0224 
00:06:17.573272 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:17 crc kubenswrapper[4824]: I0224 00:06:17.573302 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:17 crc kubenswrapper[4824]: I0224 00:06:17.574250 4824 scope.go:117] "RemoveContainer" containerID="1f1c1f049b88250d60dc213ff2c7023d6f8204c87e8e78cff43d73af450d7e46" Feb 24 00:06:17 crc kubenswrapper[4824]: E0224 00:06:17.574617 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 00:06:17 crc kubenswrapper[4824]: I0224 00:06:17.623791 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:13Z Feb 24 00:06:17 crc kubenswrapper[4824]: I0224 00:06:17.653163 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 04:15:27.337103489 +0000 UTC Feb 24 00:06:18 crc kubenswrapper[4824]: I0224 00:06:18.623836 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 2026-02-23T05:33:13Z Feb 24 00:06:18 crc 
kubenswrapper[4824]: I0224 00:06:18.653437 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 23:35:08.959263457 +0000 UTC Feb 24 00:06:18 crc kubenswrapper[4824]: E0224 00:06:18.760492 4824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 24 00:06:18 crc kubenswrapper[4824]: I0224 00:06:18.763792 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:18 crc kubenswrapper[4824]: I0224 00:06:18.765565 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:18 crc kubenswrapper[4824]: I0224 00:06:18.765645 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:18 crc kubenswrapper[4824]: I0224 00:06:18.765674 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:18 crc kubenswrapper[4824]: I0224 00:06:18.765723 4824 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 00:06:18 crc kubenswrapper[4824]: E0224 00:06:18.768322 4824 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 2026-02-23T05:33:13Z" node="crc" Feb 24 00:06:18 crc kubenswrapper[4824]: W0224 00:06:18.914981 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 2026-02-23T05:33:13Z Feb 24 00:06:18 crc kubenswrapper[4824]: E0224 00:06:18.915120 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:06:19 crc kubenswrapper[4824]: I0224 00:06:19.626280 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:19Z is after 2026-02-23T05:33:13Z Feb 24 00:06:19 crc kubenswrapper[4824]: I0224 00:06:19.653900 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:34:56.182997188 +0000 UTC Feb 24 00:06:20 crc kubenswrapper[4824]: W0224 00:06:20.020411 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:20Z is after 2026-02-23T05:33:13Z Feb 24 00:06:20 crc kubenswrapper[4824]: E0224 00:06:20.020586 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:20Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:06:20 crc kubenswrapper[4824]: I0224 00:06:20.357586 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:06:20 crc kubenswrapper[4824]: I0224 00:06:20.357943 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:20 crc kubenswrapper[4824]: I0224 00:06:20.359856 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:20 crc kubenswrapper[4824]: I0224 00:06:20.359934 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:20 crc kubenswrapper[4824]: I0224 00:06:20.359955 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:20 crc kubenswrapper[4824]: I0224 00:06:20.360790 4824 scope.go:117] "RemoveContainer" containerID="1f1c1f049b88250d60dc213ff2c7023d6f8204c87e8e78cff43d73af450d7e46" Feb 24 00:06:20 crc kubenswrapper[4824]: E0224 00:06:20.361032 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 00:06:20 crc kubenswrapper[4824]: I0224 00:06:20.626621 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:20Z is after 2026-02-23T05:33:13Z Feb 24 00:06:20 crc kubenswrapper[4824]: I0224 00:06:20.655171 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 11:25:32.459802537 +0000 UTC Feb 24 00:06:21 crc kubenswrapper[4824]: E0224 00:06:21.372573 4824 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:21Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189705f6f4979d2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:05:36.61725009 +0000 UTC m=+0.606874579,LastTimestamp:2026-02-24 00:05:36.61725009 +0000 UTC m=+0.606874579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:06:21 crc kubenswrapper[4824]: I0224 00:06:21.624796 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:21Z is after 2026-02-23T05:33:13Z Feb 24 00:06:21 crc kubenswrapper[4824]: I0224 00:06:21.655412 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 11:30:23.519255167 +0000 UTC Feb 24 00:06:22 crc kubenswrapper[4824]: I0224 00:06:22.626374 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:22Z is after 2026-02-23T05:33:13Z Feb 24 00:06:22 crc kubenswrapper[4824]: I0224 00:06:22.656044 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 20:07:20.262902446 +0000 UTC Feb 24 00:06:23 crc kubenswrapper[4824]: I0224 00:06:23.521774 4824 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 00:06:23 crc kubenswrapper[4824]: I0224 00:06:23.521901 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 00:06:23 crc kubenswrapper[4824]: I0224 00:06:23.625988 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:23Z is after 
2026-02-23T05:33:13Z Feb 24 00:06:23 crc kubenswrapper[4824]: I0224 00:06:23.656429 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:49:16.844178692 +0000 UTC Feb 24 00:06:24 crc kubenswrapper[4824]: I0224 00:06:24.624633 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:24Z is after 2026-02-23T05:33:13Z Feb 24 00:06:24 crc kubenswrapper[4824]: I0224 00:06:24.656717 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:38:49.301233849 +0000 UTC Feb 24 00:06:25 crc kubenswrapper[4824]: I0224 00:06:25.042837 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:06:25 crc kubenswrapper[4824]: I0224 00:06:25.043057 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:25 crc kubenswrapper[4824]: I0224 00:06:25.044667 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:25 crc kubenswrapper[4824]: I0224 00:06:25.044705 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:25 crc kubenswrapper[4824]: I0224 00:06:25.044716 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:25 crc kubenswrapper[4824]: W0224 00:06:25.046959 4824 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:25Z is after 2026-02-23T05:33:13Z Feb 24 00:06:25 crc kubenswrapper[4824]: E0224 00:06:25.047059 4824 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:25Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:06:25 crc kubenswrapper[4824]: I0224 00:06:25.625648 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:25Z is after 2026-02-23T05:33:13Z Feb 24 00:06:25 crc kubenswrapper[4824]: I0224 00:06:25.657629 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:18:32.420175452 +0000 UTC Feb 24 00:06:25 crc kubenswrapper[4824]: E0224 00:06:25.767212 4824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:25Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 24 00:06:25 crc kubenswrapper[4824]: I0224 00:06:25.768601 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:25 crc 
kubenswrapper[4824]: I0224 00:06:25.770434 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:25 crc kubenswrapper[4824]: I0224 00:06:25.770501 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:25 crc kubenswrapper[4824]: I0224 00:06:25.770556 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:25 crc kubenswrapper[4824]: I0224 00:06:25.770611 4824 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 00:06:25 crc kubenswrapper[4824]: E0224 00:06:25.776169 4824 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:25Z is after 2026-02-23T05:33:13Z" node="crc" Feb 24 00:06:26 crc kubenswrapper[4824]: I0224 00:06:26.625623 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:26Z is after 2026-02-23T05:33:13Z Feb 24 00:06:26 crc kubenswrapper[4824]: I0224 00:06:26.658339 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 17:44:08.16625045 +0000 UTC Feb 24 00:06:26 crc kubenswrapper[4824]: E0224 00:06:26.752807 4824 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 00:06:27 crc kubenswrapper[4824]: I0224 00:06:27.624423 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:27Z is after 2026-02-23T05:33:13Z Feb 24 00:06:27 crc kubenswrapper[4824]: I0224 00:06:27.658725 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:15:20.729260485 +0000 UTC Feb 24 00:06:28 crc kubenswrapper[4824]: I0224 00:06:28.626657 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:28Z is after 2026-02-23T05:33:13Z Feb 24 00:06:28 crc kubenswrapper[4824]: I0224 00:06:28.659576 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 12:29:16.388457681 +0000 UTC Feb 24 00:06:29 crc kubenswrapper[4824]: I0224 00:06:29.625349 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:29Z is after 2026-02-23T05:33:13Z Feb 24 00:06:29 crc kubenswrapper[4824]: I0224 00:06:29.660640 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 21:51:25.431108001 +0000 UTC Feb 24 00:06:30 crc kubenswrapper[4824]: I0224 00:06:30.626209 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:30Z is after 2026-02-23T05:33:13Z Feb 24 00:06:30 crc kubenswrapper[4824]: I0224 00:06:30.660843 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:07:06.745187889 +0000 UTC Feb 24 00:06:31 crc kubenswrapper[4824]: E0224 00:06:31.379065 4824 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:31Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189705f6f4979d2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:05:36.61725009 +0000 UTC m=+0.606874579,LastTimestamp:2026-02-24 00:05:36.61725009 +0000 UTC m=+0.606874579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:06:31 crc kubenswrapper[4824]: I0224 00:06:31.624697 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:31Z is after 2026-02-23T05:33:13Z Feb 24 00:06:31 crc kubenswrapper[4824]: I0224 00:06:31.661599 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 
2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 21:51:41.262964702 +0000 UTC Feb 24 00:06:32 crc kubenswrapper[4824]: I0224 00:06:32.625368 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:32Z is after 2026-02-23T05:33:13Z Feb 24 00:06:32 crc kubenswrapper[4824]: I0224 00:06:32.662045 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 05:30:52.4427183 +0000 UTC Feb 24 00:06:32 crc kubenswrapper[4824]: E0224 00:06:32.773081 4824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:32Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 24 00:06:32 crc kubenswrapper[4824]: I0224 00:06:32.776317 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:32 crc kubenswrapper[4824]: I0224 00:06:32.777963 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:32 crc kubenswrapper[4824]: I0224 00:06:32.778002 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:32 crc kubenswrapper[4824]: I0224 00:06:32.778016 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:32 crc kubenswrapper[4824]: I0224 00:06:32.778048 4824 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 00:06:32 crc 
kubenswrapper[4824]: E0224 00:06:32.781762 4824 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:32Z is after 2026-02-23T05:33:13Z" node="crc" Feb 24 00:06:33 crc kubenswrapper[4824]: I0224 00:06:33.522643 4824 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 00:06:33 crc kubenswrapper[4824]: I0224 00:06:33.522796 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 00:06:33 crc kubenswrapper[4824]: I0224 00:06:33.522904 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:06:33 crc kubenswrapper[4824]: I0224 00:06:33.523140 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:33 crc kubenswrapper[4824]: I0224 00:06:33.524903 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:33 crc kubenswrapper[4824]: I0224 00:06:33.524967 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:33 crc kubenswrapper[4824]: I0224 00:06:33.524992 4824 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 24 00:06:33 crc kubenswrapper[4824]: I0224 00:06:33.525868 4824 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"8fe1cb1e114c1440630a33e3dd127af3600c520e37148cc01ab2e4ed346941ba"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 24 00:06:33 crc kubenswrapper[4824]: I0224 00:06:33.526098 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://8fe1cb1e114c1440630a33e3dd127af3600c520e37148cc01ab2e4ed346941ba" gracePeriod=30 Feb 24 00:06:33 crc kubenswrapper[4824]: I0224 00:06:33.626774 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:33Z is after 2026-02-23T05:33:13Z Feb 24 00:06:33 crc kubenswrapper[4824]: I0224 00:06:33.663030 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 19:37:49.088062821 +0000 UTC Feb 24 00:06:33 crc kubenswrapper[4824]: I0224 00:06:33.957587 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 24 00:06:33 crc kubenswrapper[4824]: I0224 00:06:33.959113 4824 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 24 00:06:33 crc kubenswrapper[4824]: I0224 00:06:33.959887 4824 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8fe1cb1e114c1440630a33e3dd127af3600c520e37148cc01ab2e4ed346941ba" exitCode=255 Feb 24 00:06:33 crc kubenswrapper[4824]: I0224 00:06:33.959950 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8fe1cb1e114c1440630a33e3dd127af3600c520e37148cc01ab2e4ed346941ba"} Feb 24 00:06:33 crc kubenswrapper[4824]: I0224 00:06:33.960002 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"706813f751a53e8e88d84668245bb4b94bc39d6b611f0fed9f774c226dc8c632"} Feb 24 00:06:33 crc kubenswrapper[4824]: I0224 00:06:33.960035 4824 scope.go:117] "RemoveContainer" containerID="ef63a3a20052bbda09997002dbbce1fd4cdf577f00711857db86b460ed4e8165" Feb 24 00:06:33 crc kubenswrapper[4824]: I0224 00:06:33.960288 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:33 crc kubenswrapper[4824]: I0224 00:06:33.962646 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:33 crc kubenswrapper[4824]: I0224 00:06:33.962699 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:33 crc kubenswrapper[4824]: I0224 00:06:33.962713 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:34 crc kubenswrapper[4824]: I0224 00:06:34.623733 4824 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:34Z is after 2026-02-23T05:33:13Z Feb 24 00:06:34 crc kubenswrapper[4824]: I0224 00:06:34.663583 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 19:09:55.200913841 +0000 UTC Feb 24 00:06:34 crc kubenswrapper[4824]: I0224 00:06:34.680861 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:06:34 crc kubenswrapper[4824]: I0224 00:06:34.967329 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 24 00:06:34 crc kubenswrapper[4824]: I0224 00:06:34.969406 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:34 crc kubenswrapper[4824]: I0224 00:06:34.970829 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:34 crc kubenswrapper[4824]: I0224 00:06:34.970904 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:34 crc kubenswrapper[4824]: I0224 00:06:34.970933 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:35 crc kubenswrapper[4824]: I0224 00:06:35.625158 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-24T00:06:35Z is after 2026-02-23T05:33:13Z Feb 24 00:06:35 crc kubenswrapper[4824]: I0224 00:06:35.664778 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:13:17.166117538 +0000 UTC Feb 24 00:06:35 crc kubenswrapper[4824]: I0224 00:06:35.693306 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:35 crc kubenswrapper[4824]: I0224 00:06:35.695227 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:35 crc kubenswrapper[4824]: I0224 00:06:35.695277 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:35 crc kubenswrapper[4824]: I0224 00:06:35.695291 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:35 crc kubenswrapper[4824]: I0224 00:06:35.696092 4824 scope.go:117] "RemoveContainer" containerID="1f1c1f049b88250d60dc213ff2c7023d6f8204c87e8e78cff43d73af450d7e46" Feb 24 00:06:35 crc kubenswrapper[4824]: I0224 00:06:35.975408 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 00:06:35 crc kubenswrapper[4824]: I0224 00:06:35.977051 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a"} Feb 24 00:06:35 crc kubenswrapper[4824]: I0224 00:06:35.977242 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:35 crc kubenswrapper[4824]: I0224 00:06:35.978154 
4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:35 crc kubenswrapper[4824]: I0224 00:06:35.978189 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:35 crc kubenswrapper[4824]: I0224 00:06:35.978198 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:36 crc kubenswrapper[4824]: I0224 00:06:36.623928 4824 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:36Z is after 2026-02-23T05:33:13Z Feb 24 00:06:36 crc kubenswrapper[4824]: I0224 00:06:36.665588 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 20:55:11.83130633 +0000 UTC Feb 24 00:06:36 crc kubenswrapper[4824]: E0224 00:06:36.753307 4824 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 00:06:36 crc kubenswrapper[4824]: I0224 00:06:36.984987 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 24 00:06:36 crc kubenswrapper[4824]: I0224 00:06:36.985830 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 00:06:36 crc kubenswrapper[4824]: I0224 00:06:36.988725 4824 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a" exitCode=255 Feb 24 00:06:36 crc kubenswrapper[4824]: I0224 00:06:36.988773 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a"} Feb 24 00:06:36 crc kubenswrapper[4824]: I0224 00:06:36.988820 4824 scope.go:117] "RemoveContainer" containerID="1f1c1f049b88250d60dc213ff2c7023d6f8204c87e8e78cff43d73af450d7e46" Feb 24 00:06:36 crc kubenswrapper[4824]: I0224 00:06:36.989097 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:36 crc kubenswrapper[4824]: I0224 00:06:36.990778 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:36 crc kubenswrapper[4824]: I0224 00:06:36.990871 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:36 crc kubenswrapper[4824]: I0224 00:06:36.990963 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:36 crc kubenswrapper[4824]: I0224 00:06:36.991638 4824 scope.go:117] "RemoveContainer" containerID="b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a" Feb 24 00:06:36 crc kubenswrapper[4824]: E0224 00:06:36.991858 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 00:06:37 crc kubenswrapper[4824]: I0224 00:06:37.456865 4824 reconstruct.go:205] 
"DevicePaths of reconstructed volumes updated" Feb 24 00:06:37 crc kubenswrapper[4824]: I0224 00:06:37.571741 4824 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:06:37 crc kubenswrapper[4824]: I0224 00:06:37.667672 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:19:47.09999595 +0000 UTC Feb 24 00:06:37 crc kubenswrapper[4824]: I0224 00:06:37.993686 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 24 00:06:37 crc kubenswrapper[4824]: I0224 00:06:37.997372 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:37 crc kubenswrapper[4824]: I0224 00:06:37.999163 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:37 crc kubenswrapper[4824]: I0224 00:06:37.999247 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:37 crc kubenswrapper[4824]: I0224 00:06:37.999271 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:38 crc kubenswrapper[4824]: I0224 00:06:38.000482 4824 scope.go:117] "RemoveContainer" containerID="b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a" Feb 24 00:06:38 crc kubenswrapper[4824]: E0224 00:06:38.001047 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 00:06:38 crc kubenswrapper[4824]: I0224 00:06:38.668207 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 07:49:56.39286261 +0000 UTC Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.668982 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 23:51:26.874709425 +0000 UTC Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.781855 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.783424 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.783858 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.784067 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.784383 4824 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.794088 4824 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.794559 4824 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 24 00:06:39 crc kubenswrapper[4824]: E0224 00:06:39.794591 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.800212 4824 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.800573 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.800818 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.801009 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.801177 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:39Z","lastTransitionTime":"2026-02-24T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:39 crc kubenswrapper[4824]: E0224 00:06:39.820329 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.833263 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.833697 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.833919 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.834122 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.834312 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:39Z","lastTransitionTime":"2026-02-24T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:39 crc kubenswrapper[4824]: E0224 00:06:39.850832 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.859870 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.859913 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.859927 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.859952 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.859968 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:39Z","lastTransitionTime":"2026-02-24T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:39 crc kubenswrapper[4824]: E0224 00:06:39.873546 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.886178 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.886209 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.886225 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.886244 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:39 crc kubenswrapper[4824]: I0224 00:06:39.886261 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:39Z","lastTransitionTime":"2026-02-24T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:39 crc kubenswrapper[4824]: E0224 00:06:39.899000 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:39 crc kubenswrapper[4824]: E0224 00:06:39.899628 4824 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:06:39 crc kubenswrapper[4824]: E0224 00:06:39.899836 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:40 crc kubenswrapper[4824]: E0224 00:06:40.000058 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:40 crc kubenswrapper[4824]: E0224 00:06:40.101110 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:40 crc kubenswrapper[4824]: E0224 00:06:40.202617 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:40 crc kubenswrapper[4824]: E0224 00:06:40.303461 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:40 crc kubenswrapper[4824]: I0224 00:06:40.357650 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:06:40 crc kubenswrapper[4824]: I0224 00:06:40.358145 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:40 crc kubenswrapper[4824]: I0224 00:06:40.359770 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:40 crc kubenswrapper[4824]: I0224 00:06:40.359852 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:40 crc kubenswrapper[4824]: I0224 
00:06:40.359872 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:40 crc kubenswrapper[4824]: I0224 00:06:40.361250 4824 scope.go:117] "RemoveContainer" containerID="b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a" Feb 24 00:06:40 crc kubenswrapper[4824]: E0224 00:06:40.361632 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 00:06:40 crc kubenswrapper[4824]: E0224 00:06:40.404309 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:40 crc kubenswrapper[4824]: E0224 00:06:40.504790 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:40 crc kubenswrapper[4824]: I0224 00:06:40.521299 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:06:40 crc kubenswrapper[4824]: I0224 00:06:40.521592 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:40 crc kubenswrapper[4824]: I0224 00:06:40.523395 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:40 crc kubenswrapper[4824]: I0224 00:06:40.523464 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:40 crc kubenswrapper[4824]: I0224 00:06:40.523484 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 24 00:06:40 crc kubenswrapper[4824]: E0224 00:06:40.605077 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:40 crc kubenswrapper[4824]: I0224 00:06:40.669440 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 23:31:20.358464057 +0000 UTC Feb 24 00:06:40 crc kubenswrapper[4824]: E0224 00:06:40.705445 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:40 crc kubenswrapper[4824]: E0224 00:06:40.805901 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:40 crc kubenswrapper[4824]: E0224 00:06:40.906317 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:41 crc kubenswrapper[4824]: E0224 00:06:41.006618 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:41 crc kubenswrapper[4824]: E0224 00:06:41.107970 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:41 crc kubenswrapper[4824]: E0224 00:06:41.208092 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:41 crc kubenswrapper[4824]: E0224 00:06:41.309261 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:41 crc kubenswrapper[4824]: E0224 00:06:41.410271 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:41 crc kubenswrapper[4824]: E0224 00:06:41.510724 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:41 crc kubenswrapper[4824]: E0224 
00:06:41.611590 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:41 crc kubenswrapper[4824]: I0224 00:06:41.674686 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 07:44:32.202341001 +0000 UTC Feb 24 00:06:41 crc kubenswrapper[4824]: I0224 00:06:41.692959 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:41 crc kubenswrapper[4824]: I0224 00:06:41.694471 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:41 crc kubenswrapper[4824]: I0224 00:06:41.694538 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:41 crc kubenswrapper[4824]: I0224 00:06:41.694550 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:41 crc kubenswrapper[4824]: E0224 00:06:41.711951 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:41 crc kubenswrapper[4824]: E0224 00:06:41.812811 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:41 crc kubenswrapper[4824]: E0224 00:06:41.913429 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:42 crc kubenswrapper[4824]: E0224 00:06:42.013547 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:42 crc kubenswrapper[4824]: E0224 00:06:42.113970 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:42 crc kubenswrapper[4824]: E0224 00:06:42.214660 4824 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:42 crc kubenswrapper[4824]: E0224 00:06:42.315498 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:42 crc kubenswrapper[4824]: E0224 00:06:42.416046 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:42 crc kubenswrapper[4824]: E0224 00:06:42.516444 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:42 crc kubenswrapper[4824]: E0224 00:06:42.616893 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:42 crc kubenswrapper[4824]: I0224 00:06:42.675891 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 13:23:50.034027917 +0000 UTC Feb 24 00:06:42 crc kubenswrapper[4824]: E0224 00:06:42.717774 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:42 crc kubenswrapper[4824]: E0224 00:06:42.818667 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:42 crc kubenswrapper[4824]: I0224 00:06:42.859125 4824 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 24 00:06:42 crc kubenswrapper[4824]: E0224 00:06:42.918859 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:43 crc kubenswrapper[4824]: E0224 00:06:43.019260 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:43 crc kubenswrapper[4824]: E0224 00:06:43.119567 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"crc\" not found" Feb 24 00:06:43 crc kubenswrapper[4824]: E0224 00:06:43.220267 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:43 crc kubenswrapper[4824]: E0224 00:06:43.320462 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:43 crc kubenswrapper[4824]: E0224 00:06:43.421323 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:43 crc kubenswrapper[4824]: E0224 00:06:43.522251 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:43 crc kubenswrapper[4824]: I0224 00:06:43.522292 4824 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 00:06:43 crc kubenswrapper[4824]: I0224 00:06:43.522354 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 00:06:43 crc kubenswrapper[4824]: E0224 00:06:43.622353 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:43 crc kubenswrapper[4824]: I0224 00:06:43.676658 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 09:27:37.873862517 +0000 UTC Feb 24 00:06:43 
crc kubenswrapper[4824]: E0224 00:06:43.722669 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:43 crc kubenswrapper[4824]: E0224 00:06:43.822918 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:43 crc kubenswrapper[4824]: E0224 00:06:43.923736 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:44 crc kubenswrapper[4824]: E0224 00:06:44.024141 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:44 crc kubenswrapper[4824]: E0224 00:06:44.124860 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:44 crc kubenswrapper[4824]: E0224 00:06:44.225207 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:44 crc kubenswrapper[4824]: E0224 00:06:44.325440 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:44 crc kubenswrapper[4824]: E0224 00:06:44.426458 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:44 crc kubenswrapper[4824]: E0224 00:06:44.527191 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:44 crc kubenswrapper[4824]: E0224 00:06:44.627828 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:44 crc kubenswrapper[4824]: I0224 00:06:44.676825 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:35:28.969885904 +0000 UTC Feb 24 00:06:44 crc kubenswrapper[4824]: E0224 00:06:44.728191 4824 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:44 crc kubenswrapper[4824]: E0224 00:06:44.828733 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:44 crc kubenswrapper[4824]: E0224 00:06:44.929625 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:45 crc kubenswrapper[4824]: E0224 00:06:45.029811 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:45 crc kubenswrapper[4824]: E0224 00:06:45.130728 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:45 crc kubenswrapper[4824]: E0224 00:06:45.231589 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:45 crc kubenswrapper[4824]: E0224 00:06:45.332660 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:45 crc kubenswrapper[4824]: E0224 00:06:45.433666 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:45 crc kubenswrapper[4824]: E0224 00:06:45.534768 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:45 crc kubenswrapper[4824]: E0224 00:06:45.634898 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:45 crc kubenswrapper[4824]: I0224 00:06:45.677410 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:57:14.559341208 +0000 UTC Feb 24 00:06:45 crc kubenswrapper[4824]: E0224 00:06:45.735217 4824 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"crc\" not found" Feb 24 00:06:45 crc kubenswrapper[4824]: E0224 00:06:45.835887 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:45 crc kubenswrapper[4824]: E0224 00:06:45.936803 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:46 crc kubenswrapper[4824]: E0224 00:06:46.037108 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:46 crc kubenswrapper[4824]: E0224 00:06:46.138021 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:46 crc kubenswrapper[4824]: E0224 00:06:46.238336 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:46 crc kubenswrapper[4824]: E0224 00:06:46.339986 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:46 crc kubenswrapper[4824]: E0224 00:06:46.440736 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:46 crc kubenswrapper[4824]: E0224 00:06:46.541665 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:46 crc kubenswrapper[4824]: E0224 00:06:46.642885 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:46 crc kubenswrapper[4824]: I0224 00:06:46.678257 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 06:42:05.214927626 +0000 UTC Feb 24 00:06:46 crc kubenswrapper[4824]: E0224 00:06:46.744122 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 
00:06:46 crc kubenswrapper[4824]: E0224 00:06:46.754274 4824 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 00:06:46 crc kubenswrapper[4824]: E0224 00:06:46.845004 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:46 crc kubenswrapper[4824]: E0224 00:06:46.945794 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:47 crc kubenswrapper[4824]: E0224 00:06:47.046448 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:47 crc kubenswrapper[4824]: E0224 00:06:47.147141 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:47 crc kubenswrapper[4824]: E0224 00:06:47.247789 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:47 crc kubenswrapper[4824]: E0224 00:06:47.348117 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:47 crc kubenswrapper[4824]: E0224 00:06:47.448822 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:47 crc kubenswrapper[4824]: E0224 00:06:47.550037 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:47 crc kubenswrapper[4824]: E0224 00:06:47.650800 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:47 crc kubenswrapper[4824]: I0224 00:06:47.679207 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 03:30:39.517843055 +0000 UTC Feb 24 00:06:47 crc 
kubenswrapper[4824]: E0224 00:06:47.751057 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:47 crc kubenswrapper[4824]: E0224 00:06:47.852010 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:47 crc kubenswrapper[4824]: E0224 00:06:47.952711 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:48 crc kubenswrapper[4824]: E0224 00:06:48.054003 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:48 crc kubenswrapper[4824]: E0224 00:06:48.154965 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:48 crc kubenswrapper[4824]: E0224 00:06:48.255765 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:48 crc kubenswrapper[4824]: E0224 00:06:48.356658 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:48 crc kubenswrapper[4824]: E0224 00:06:48.457677 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:48 crc kubenswrapper[4824]: E0224 00:06:48.558603 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:48 crc kubenswrapper[4824]: E0224 00:06:48.659447 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:48 crc kubenswrapper[4824]: I0224 00:06:48.679827 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:28:43.858003213 +0000 UTC Feb 24 00:06:48 crc kubenswrapper[4824]: E0224 00:06:48.760535 4824 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:48 crc kubenswrapper[4824]: E0224 00:06:48.861485 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:48 crc kubenswrapper[4824]: I0224 00:06:48.937453 4824 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 00:06:48 crc kubenswrapper[4824]: I0224 00:06:48.950036 4824 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 00:06:48 crc kubenswrapper[4824]: E0224 00:06:48.962385 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:49 crc kubenswrapper[4824]: E0224 00:06:49.063113 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:49 crc kubenswrapper[4824]: E0224 00:06:49.164009 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:49 crc kubenswrapper[4824]: E0224 00:06:49.264229 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:49 crc kubenswrapper[4824]: E0224 00:06:49.365343 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:49 crc kubenswrapper[4824]: E0224 00:06:49.466334 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:49 crc kubenswrapper[4824]: E0224 00:06:49.566500 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:49 crc kubenswrapper[4824]: E0224 00:06:49.667324 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 
00:06:49 crc kubenswrapper[4824]: I0224 00:06:49.673856 4824 csr.go:261] certificate signing request csr-s9jhh is approved, waiting to be issued Feb 24 00:06:49 crc kubenswrapper[4824]: I0224 00:06:49.681974 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 09:04:51.106511388 +0000 UTC Feb 24 00:06:49 crc kubenswrapper[4824]: I0224 00:06:49.716343 4824 csr.go:257] certificate signing request csr-s9jhh is issued Feb 24 00:06:49 crc kubenswrapper[4824]: E0224 00:06:49.769748 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:49 crc kubenswrapper[4824]: E0224 00:06:49.870913 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:06:49 crc kubenswrapper[4824]: E0224 00:06:49.965298 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 24 00:06:49 crc kubenswrapper[4824]: I0224 00:06:49.969392 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:49 crc kubenswrapper[4824]: I0224 00:06:49.969433 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:49 crc kubenswrapper[4824]: I0224 00:06:49.969447 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:49 crc kubenswrapper[4824]: I0224 00:06:49.969482 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:49 crc kubenswrapper[4824]: I0224 00:06:49.969494 4824 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:49Z","lastTransitionTime":"2026-02-24T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:49 crc kubenswrapper[4824]: E0224 00:06:49.979498 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:49 crc kubenswrapper[4824]: I0224 00:06:49.987750 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:49 crc kubenswrapper[4824]: I0224 00:06:49.987798 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:49 crc kubenswrapper[4824]: I0224 00:06:49.987810 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:49 crc kubenswrapper[4824]: I0224 00:06:49.987830 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:49 crc kubenswrapper[4824]: I0224 00:06:49.987842 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:49Z","lastTransitionTime":"2026-02-24T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:49 crc kubenswrapper[4824]: E0224 00:06:49.998318 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:50 crc kubenswrapper[4824]: I0224 00:06:50.001742 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:50 crc kubenswrapper[4824]: I0224 00:06:50.001788 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:50 crc kubenswrapper[4824]: I0224 00:06:50.001801 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:50 crc kubenswrapper[4824]: I0224 00:06:50.001820 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:50 crc kubenswrapper[4824]: I0224 00:06:50.001841 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:50Z","lastTransitionTime":"2026-02-24T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:50 crc kubenswrapper[4824]: E0224 00:06:50.011801 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:50 crc kubenswrapper[4824]: I0224 00:06:50.017377 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:50 crc kubenswrapper[4824]: I0224 00:06:50.017415 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:50 crc kubenswrapper[4824]: I0224 00:06:50.017428 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:50 crc kubenswrapper[4824]: I0224 00:06:50.017449 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:50 crc kubenswrapper[4824]: I0224 00:06:50.017461 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:50Z","lastTransitionTime":"2026-02-24T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:50 crc kubenswrapper[4824]: E0224 00:06:50.041336 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 24 00:06:50 crc kubenswrapper[4824]: E0224 00:06:50.041473 4824 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 24 00:06:50 crc kubenswrapper[4824]: E0224 00:06:50.041500 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:50 crc kubenswrapper[4824]: E0224 00:06:50.142734 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:50 crc kubenswrapper[4824]: E0224 00:06:50.243305 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:50 crc kubenswrapper[4824]: E0224 00:06:50.343989 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:50 crc kubenswrapper[4824]: E0224 00:06:50.444884 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:50 crc kubenswrapper[4824]: I0224 00:06:50.537600 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:06:50 crc kubenswrapper[4824]: I0224 00:06:50.537803 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:06:50 crc kubenswrapper[4824]: I0224 00:06:50.539055 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:50 crc kubenswrapper[4824]: I0224 00:06:50.539089 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:50 crc kubenswrapper[4824]: I0224 00:06:50.539100 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:50 crc kubenswrapper[4824]: I0224 00:06:50.544733 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:06:50 crc kubenswrapper[4824]: E0224 00:06:50.545872 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:50 crc kubenswrapper[4824]: E0224 00:06:50.646318 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:50 crc kubenswrapper[4824]: I0224 00:06:50.682755 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 00:50:38.041379468 +0000 UTC
Feb 24 00:06:50 crc kubenswrapper[4824]: I0224 00:06:50.718152 4824 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 00:01:49 +0000 UTC, rotation deadline is 2026-11-10 02:17:58.057624157 +0000 UTC
Feb 24 00:06:50 crc kubenswrapper[4824]: I0224 00:06:50.718212 4824 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6218h11m7.339415298s for next certificate rotation
Feb 24 00:06:50 crc kubenswrapper[4824]: E0224 00:06:50.746703 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:50 crc kubenswrapper[4824]: E0224 00:06:50.847584 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:50 crc kubenswrapper[4824]: E0224 00:06:50.948124 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:51 crc kubenswrapper[4824]: I0224 00:06:51.035556 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:06:51 crc kubenswrapper[4824]: I0224 00:06:51.036681 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:51 crc kubenswrapper[4824]: I0224 00:06:51.036723 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:51 crc kubenswrapper[4824]: I0224 00:06:51.036734 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:51 crc kubenswrapper[4824]: E0224 00:06:51.048422 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:51 crc kubenswrapper[4824]: E0224 00:06:51.149512 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:51 crc kubenswrapper[4824]: E0224 00:06:51.252672 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:51 crc kubenswrapper[4824]: E0224 00:06:51.353358 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:51 crc kubenswrapper[4824]: E0224 00:06:51.454320 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:51 crc kubenswrapper[4824]: E0224 00:06:51.554670 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:51 crc kubenswrapper[4824]: E0224 00:06:51.654916 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:51 crc kubenswrapper[4824]: I0224 00:06:51.683316 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:14:38.933680681 +0000 UTC
Feb 24 00:06:51 crc kubenswrapper[4824]: E0224 00:06:51.755104 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:51 crc kubenswrapper[4824]: E0224 00:06:51.855978 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:51 crc kubenswrapper[4824]: E0224 00:06:51.956828 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:52 crc kubenswrapper[4824]: E0224 00:06:52.057676 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:52 crc kubenswrapper[4824]: E0224 00:06:52.158775 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:52 crc kubenswrapper[4824]: E0224 00:06:52.259410 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:52 crc kubenswrapper[4824]: E0224 00:06:52.360299 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:52 crc kubenswrapper[4824]: E0224 00:06:52.461240 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:52 crc kubenswrapper[4824]: E0224 00:06:52.562333 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:52 crc kubenswrapper[4824]: E0224 00:06:52.663369 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:52 crc kubenswrapper[4824]: I0224 00:06:52.684043 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 04:09:35.253055869 +0000 UTC
Feb 24 00:06:52 crc kubenswrapper[4824]: E0224 00:06:52.763914 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:52 crc kubenswrapper[4824]: E0224 00:06:52.864669 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:52 crc kubenswrapper[4824]: E0224 00:06:52.965309 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:53 crc kubenswrapper[4824]: E0224 00:06:53.065972 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:53 crc kubenswrapper[4824]: E0224 00:06:53.166702 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:53 crc kubenswrapper[4824]: E0224 00:06:53.267755 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:53 crc kubenswrapper[4824]: E0224 00:06:53.368059 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:53 crc kubenswrapper[4824]: E0224 00:06:53.468585 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:53 crc kubenswrapper[4824]: E0224 00:06:53.569689 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:53 crc kubenswrapper[4824]: E0224 00:06:53.670537 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:53 crc kubenswrapper[4824]: I0224 00:06:53.684802 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 04:09:00.561762235 +0000 UTC
Feb 24 00:06:53 crc kubenswrapper[4824]: E0224 00:06:53.771563 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:53 crc kubenswrapper[4824]: E0224 00:06:53.872564 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:53 crc kubenswrapper[4824]: E0224 00:06:53.973229 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:54 crc kubenswrapper[4824]: E0224 00:06:54.074274 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:54 crc kubenswrapper[4824]: E0224 00:06:54.174716 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:54 crc kubenswrapper[4824]: E0224 00:06:54.274984 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:54 crc kubenswrapper[4824]: E0224 00:06:54.375690 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:54 crc kubenswrapper[4824]: E0224 00:06:54.476366 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:54 crc kubenswrapper[4824]: E0224 00:06:54.577393 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:54 crc kubenswrapper[4824]: E0224 00:06:54.678506 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:54 crc kubenswrapper[4824]: I0224 00:06:54.685431 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 10:03:38.360293656 +0000 UTC
Feb 24 00:06:54 crc kubenswrapper[4824]: E0224 00:06:54.778628 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:54 crc kubenswrapper[4824]: E0224 00:06:54.879208 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:54 crc kubenswrapper[4824]: E0224 00:06:54.980216 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:55 crc kubenswrapper[4824]: E0224 00:06:55.080976 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:55 crc kubenswrapper[4824]: E0224 00:06:55.182129 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:55 crc kubenswrapper[4824]: E0224 00:06:55.282621 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:55 crc kubenswrapper[4824]: E0224 00:06:55.383321 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:55 crc kubenswrapper[4824]: E0224 00:06:55.484220 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:55 crc kubenswrapper[4824]: E0224 00:06:55.584742 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:55 crc kubenswrapper[4824]: E0224 00:06:55.685326 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:55 crc kubenswrapper[4824]: I0224 00:06:55.686453 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 20:57:31.036983447 +0000 UTC
Feb 24 00:06:55 crc kubenswrapper[4824]: I0224 00:06:55.693820 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:06:55 crc kubenswrapper[4824]: I0224 00:06:55.695078 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:55 crc kubenswrapper[4824]: I0224 00:06:55.695126 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:55 crc kubenswrapper[4824]: I0224 00:06:55.695139 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:55 crc kubenswrapper[4824]: I0224 00:06:55.695828 4824 scope.go:117] "RemoveContainer" containerID="b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a"
Feb 24 00:06:55 crc kubenswrapper[4824]: E0224 00:06:55.696013 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 24 00:06:55 crc kubenswrapper[4824]: E0224 00:06:55.786144 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:55 crc kubenswrapper[4824]: E0224 00:06:55.886643 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:55 crc kubenswrapper[4824]: E0224 00:06:55.987677 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:56 crc kubenswrapper[4824]: E0224 00:06:56.088119 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:56 crc kubenswrapper[4824]: E0224 00:06:56.188956 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:56 crc kubenswrapper[4824]: E0224 00:06:56.289560 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:56 crc kubenswrapper[4824]: E0224 00:06:56.390417 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:56 crc kubenswrapper[4824]: I0224 00:06:56.472090 4824 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 24 00:06:56 crc kubenswrapper[4824]: E0224 00:06:56.491065 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:56 crc kubenswrapper[4824]: E0224 00:06:56.591609 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:56 crc kubenswrapper[4824]: I0224 00:06:56.687212 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 17:30:25.447875698 +0000 UTC
Feb 24 00:06:56 crc kubenswrapper[4824]: E0224 00:06:56.692370 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:56 crc kubenswrapper[4824]: I0224 00:06:56.693874 4824 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:06:56 crc kubenswrapper[4824]: I0224 00:06:56.695233 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:56 crc kubenswrapper[4824]: I0224 00:06:56.695272 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:56 crc kubenswrapper[4824]: I0224 00:06:56.695283 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:56 crc kubenswrapper[4824]: E0224 00:06:56.755302 4824 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 24 00:06:56 crc kubenswrapper[4824]: E0224 00:06:56.793478 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:56 crc kubenswrapper[4824]: E0224 00:06:56.894140 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:56 crc kubenswrapper[4824]: E0224 00:06:56.995202 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:57 crc kubenswrapper[4824]: E0224 00:06:57.095638 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:57 crc kubenswrapper[4824]: E0224 00:06:57.196660 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:57 crc kubenswrapper[4824]: E0224 00:06:57.297795 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:57 crc kubenswrapper[4824]: E0224 00:06:57.398384 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:57 crc kubenswrapper[4824]: E0224 00:06:57.499416 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:57 crc kubenswrapper[4824]: E0224 00:06:57.600579 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:57 crc kubenswrapper[4824]: I0224 00:06:57.688018 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:39:31.220267436 +0000 UTC
Feb 24 00:06:57 crc kubenswrapper[4824]: E0224 00:06:57.701460 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:57 crc kubenswrapper[4824]: E0224 00:06:57.802282 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:57 crc kubenswrapper[4824]: E0224 00:06:57.903078 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:58 crc kubenswrapper[4824]: E0224 00:06:58.003236 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:58 crc kubenswrapper[4824]: E0224 00:06:58.103506 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:58 crc kubenswrapper[4824]: E0224 00:06:58.204575 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:58 crc kubenswrapper[4824]: E0224 00:06:58.305331 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:58 crc kubenswrapper[4824]: E0224 00:06:58.406057 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:58 crc kubenswrapper[4824]: E0224 00:06:58.506716 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:58 crc kubenswrapper[4824]: E0224 00:06:58.607807 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:58 crc kubenswrapper[4824]: I0224 00:06:58.688904 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 12:24:50.72989597 +0000 UTC
Feb 24 00:06:58 crc kubenswrapper[4824]: E0224 00:06:58.708544 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:58 crc kubenswrapper[4824]: E0224 00:06:58.809623 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:58 crc kubenswrapper[4824]: E0224 00:06:58.910128 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:59 crc kubenswrapper[4824]: E0224 00:06:59.010261 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:59 crc kubenswrapper[4824]: E0224 00:06:59.110922 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:59 crc kubenswrapper[4824]: E0224 00:06:59.211746 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:59 crc kubenswrapper[4824]: E0224 00:06:59.312115 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:59 crc kubenswrapper[4824]: E0224 00:06:59.412784 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:59 crc kubenswrapper[4824]: E0224 00:06:59.513453 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:59 crc kubenswrapper[4824]: E0224 00:06:59.614469 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:59 crc kubenswrapper[4824]: I0224 00:06:59.689935 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 05:37:17.236355843 +0000 UTC
Feb 24 00:06:59 crc kubenswrapper[4824]: E0224 00:06:59.714990 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:59 crc kubenswrapper[4824]: E0224 00:06:59.815966 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:06:59 crc kubenswrapper[4824]: E0224 00:06:59.916959 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.017594 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.118556 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.219296 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.255731 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.260757 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.260828 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.260851 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.260878 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.260899 4824 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:00Z","lastTransitionTime":"2026-02-24T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.274766 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.279643 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.279720 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.279750 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.279778 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.279797 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:00Z","lastTransitionTime":"2026-02-24T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.299596 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.304796 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.304834 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.304845 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.305104 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.305136 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:00Z","lastTransitionTime":"2026-02-24T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.320243 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.325325 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.325378 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.325398 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.325423 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.325440 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:00Z","lastTransitionTime":"2026-02-24T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.339888 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.340030 4824 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.340056 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.440783 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.541490 4824 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.638754 4824 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.644064 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.644148 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.644162 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.644181 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.644195 4824 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:00Z","lastTransitionTime":"2026-02-24T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.651371 4824 apiserver.go:52] "Watching apiserver" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.657950 4824 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.658203 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.658574 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.658732 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.658754 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.658784 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.658932 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.658884 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.659296 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.659450 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.659569 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.661267 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.661381 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.661577 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.661973 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.662071 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.662331 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.662399 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.662580 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.663510 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.690112 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2025-12-27 08:04:39.151490291 +0000 UTC Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.692763 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.708732 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.723187 4824 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.723313 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.739662 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.747414 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.747487 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.747506 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.747578 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.747609 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:00Z","lastTransitionTime":"2026-02-24T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.752725 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.764340 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.774244 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.808126 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.808200 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.808240 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod 
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.808877 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.808915 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.808668 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.808948 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.808743 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.808981 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.809049 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.809077 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 
00:07:00.809094 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.809114 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.809668 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.809242 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.809677 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.809545 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.809739 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.809551 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.809730 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.809807 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.809664 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.809831 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.809860 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.809881 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.809901 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.809921 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.809941 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.809988 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810007 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810031 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810050 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810068 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810092 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810114 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810133 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810151 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810170 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810190 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810210 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810228 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810250 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810267 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810261 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810376 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810284 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810467 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810546 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810596 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 
00:07:00.810631 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810667 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810702 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810738 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810778 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810814 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810847 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810877 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810909 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810940 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810973 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 24 00:07:00 
crc kubenswrapper[4824]: I0224 00:07:00.811011 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811048 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811085 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811127 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811166 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811203 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811240 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811271 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811297 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811329 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810351 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811367 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811410 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811445 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811479 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811513 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811568 4824 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811600 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811631 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811665 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811697 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811730 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811764 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811794 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811825 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811858 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811973 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812009 4824 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812041 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812070 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812130 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812163 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812203 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812235 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812267 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812304 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812339 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812370 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812403 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812433 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812464 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812494 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812554 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812588 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 00:07:00 crc 
kubenswrapper[4824]: I0224 00:07:00.812626 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812655 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812689 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812720 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812751 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812782 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812813 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812847 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812890 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812926 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812958 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: 
\"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812993 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813026 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813059 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813092 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813128 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813170 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813203 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813232 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813265 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813308 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813338 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 00:07:00 crc 
kubenswrapper[4824]: I0224 00:07:00.813367 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813401 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813435 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813466 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813497 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813552 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813587 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813614 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813642 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813671 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813703 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 
00:07:00.813733 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813762 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813795 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813831 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813867 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813957 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" 
(UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814020 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814058 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814091 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814126 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814156 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814194 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814233 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814268 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814304 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814342 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814380 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814413 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814450 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814489 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814564 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814609 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814645 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814680 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814715 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814750 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814788 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814825 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814859 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814892 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814928 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814961 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814998 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815033 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815067 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815098 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815127 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815156 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815308 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:07:00 
crc kubenswrapper[4824]: I0224 00:07:00.815351 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815385 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815420 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815454 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815493 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815556 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815588 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815617 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815651 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815690 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815727 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 24 00:07:00 crc 
kubenswrapper[4824]: I0224 00:07:00.815760 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815793 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815825 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815860 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815894 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815937 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815974 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816009 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816042 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816079 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816114 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816152 4824 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816194 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816234 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816265 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816307 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816343 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816376 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816405 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816487 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816559 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816604 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816646 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816694 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816733 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816781 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816816 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816860 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816904 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816943 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816980 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.817017 4824 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.817056 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810274 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810633 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.810705 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811004 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811096 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811127 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811337 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811417 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811510 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811509 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811636 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811708 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.811935 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812176 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.812188 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813137 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813254 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.813853 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814293 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814344 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814646 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814768 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814800 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814940 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.814959 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815001 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815479 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815606 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815742 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.815916 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816107 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816113 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816125 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816322 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816676 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.816765 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.817055 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.817569 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.817634 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.817757 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.818066 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.818089 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.818145 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.818209 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.818257 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.818942 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.818975 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.819034 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.819444 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.819571 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.819590 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.819627 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.819879 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.819904 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.820190 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.820637 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.820780 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.820883 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.820952 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.820914 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.821213 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.821324 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.821739 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.821890 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.822336 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.822553 4824 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.822666 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-24 00:07:01.322630049 +0000 UTC m=+85.312254538 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.822984 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.823003 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.823121 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.823210 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.823568 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.823622 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.823673 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.823816 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.824143 4824 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.824850 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.825374 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.825430 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.825629 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.825737 4824 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.825856 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:01.325834651 +0000 UTC m=+85.315459130 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.825936 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.826163 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.826636 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.828362 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.828413 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.828763 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.828987 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.829288 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.829697 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.830202 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.830298 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.830401 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:07:01.330370648 +0000 UTC m=+85.319995117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.830592 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.830656 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.830726 4824 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.830835 4824 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.830861 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on 
node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.830923 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.831073 4824 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.831103 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.831123 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.831139 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.831153 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.831167 4824 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc 
kubenswrapper[4824]: I0224 00:07:00.831189 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.831472 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.831567 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.832132 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.832245 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.832443 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.832697 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.832986 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.833204 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.833781 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.835845 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.837304 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.837674 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.838416 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.839070 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.839114 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.839207 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.839665 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.839798 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.840036 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.840106 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.840119 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.840607 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.840886 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.841320 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.841443 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.841450 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.841582 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.841334 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.842101 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.842132 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.842100 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.842153 4824 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.842244 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:01.342219383 +0000 UTC m=+85.331844062 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.842420 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.842513 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.846157 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.848338 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.849064 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.849700 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.849953 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.849982 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.849999 4824 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:00 crc kubenswrapper[4824]: E0224 00:07:00.850092 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:01.350069755 +0000 UTC m=+85.339694244 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.851245 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.851287 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.851302 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.851324 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.851343 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:00Z","lastTransitionTime":"2026-02-24T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.856798 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.857035 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.857055 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.857380 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.857981 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.858118 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.858131 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.858973 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.859201 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.859203 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.859290 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.859377 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.859387 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.859589 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.859620 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.859693 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.859694 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.859762 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.859819 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.860218 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.860358 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.861365 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.861550 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.861904 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.861983 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.862026 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.862029 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.861911 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.862284 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.862318 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.862466 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.862838 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.862897 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.863275 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.863325 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.863366 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.863417 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.864126 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.864183 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.864371 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.864206 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.864260 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.864636 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.864793 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.864897 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.865641 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.865903 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.865913 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.866009 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.866053 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.866164 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.866193 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.866214 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.866252 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.866266 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.866305 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.866515 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.866739 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.867236 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.867692 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.867919 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.868807 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.877809 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.884276 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.886546 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.901288 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.931740 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.931813 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.931898 4824 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.931927 4824 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.931946 4824 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.931962 4824 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.931975 4824 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.931988 4824 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932001 4824 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932013 4824 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932027 4824 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932040 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc 
kubenswrapper[4824]: I0224 00:07:00.932053 4824 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932064 4824 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932077 4824 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932089 4824 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932102 4824 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932114 4824 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932126 4824 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932139 4824 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932133 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932206 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932153 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932253 4824 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932266 4824 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932280 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932294 4824 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932307 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932320 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932333 4824 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932349 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932362 4824 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932374 4824 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932386 4824 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932398 4824 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932410 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932425 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932439 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932452 4824 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932466 4824 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node 
\"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932478 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932489 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932502 4824 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932555 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932570 4824 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932584 4824 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932596 4824 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932609 4824 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932623 4824 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932634 4824 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932647 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932659 4824 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932671 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932683 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932695 4824 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932707 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932719 4824 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932731 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932744 4824 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932756 4824 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932768 4824 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932779 4824 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on 
node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932791 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932805 4824 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932816 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932829 4824 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932843 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932856 4824 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932868 4824 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932880 4824 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932892 4824 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932904 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932915 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932927 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932939 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932955 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932966 4824 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932978 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.932989 4824 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933001 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933016 4824 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933029 4824 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933040 4824 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933052 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933063 4824 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933076 4824 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933087 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933100 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933112 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933123 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933135 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" 
DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933147 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933158 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933169 4824 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933181 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933193 4824 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933204 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933215 4824 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 
00:07:00.933228 4824 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933240 4824 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933252 4824 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933265 4824 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933278 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933291 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933304 4824 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933315 4824 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933327 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933340 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933351 4824 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933363 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933435 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933465 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933480 4824 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" 
DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933501 4824 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933513 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933554 4824 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933570 4824 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933583 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933597 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933614 4824 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 
00:07:00.933628 4824 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933641 4824 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933654 4824 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933668 4824 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933685 4824 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933704 4824 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933717 4824 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933732 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933751 4824 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933768 4824 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933785 4824 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933798 4824 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933810 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933824 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933838 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933852 4824 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933865 4824 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933879 4824 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933891 4824 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933903 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933919 4824 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933932 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" 
DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933945 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933958 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933970 4824 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933981 4824 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.933993 4824 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934004 4824 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934016 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934029 4824 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934040 4824 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934051 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934072 4824 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934084 4824 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934096 4824 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934107 4824 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934119 4824 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934136 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934148 4824 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934159 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934171 4824 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934215 4824 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934229 4824 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934244 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 24 
00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934256 4824 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934268 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934280 4824 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934291 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934303 4824 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934315 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934327 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934340 4824 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934352 4824 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934364 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934376 4824 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934388 4824 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934400 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934412 4824 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934424 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934437 4824 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934449 4824 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934461 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934473 4824 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934486 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934498 4824 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934510 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" 
Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.934546 4824 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.955401 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.955462 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.955478 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.955499 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.955512 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:00Z","lastTransitionTime":"2026-02-24T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.981383 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 00:07:00 crc kubenswrapper[4824]: I0224 00:07:00.996136 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.003368 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.004442 4824 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 00:07:01 crc kubenswrapper[4824]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 00:07:01 crc kubenswrapper[4824]: if [[ -f "/env/_master" ]]; then Feb 24 00:07:01 crc kubenswrapper[4824]: set -o allexport Feb 24 00:07:01 crc kubenswrapper[4824]: source "/env/_master" Feb 24 00:07:01 crc kubenswrapper[4824]: set +o allexport Feb 24 00:07:01 crc kubenswrapper[4824]: fi Feb 24 00:07:01 crc kubenswrapper[4824]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 24 00:07:01 crc kubenswrapper[4824]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 24 00:07:01 crc kubenswrapper[4824]: ho_enable="--enable-hybrid-overlay" Feb 24 00:07:01 crc kubenswrapper[4824]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 24 00:07:01 crc kubenswrapper[4824]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 24 00:07:01 crc kubenswrapper[4824]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 24 00:07:01 crc kubenswrapper[4824]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 00:07:01 crc kubenswrapper[4824]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 24 00:07:01 crc kubenswrapper[4824]: --webhook-host=127.0.0.1 \ Feb 24 00:07:01 crc kubenswrapper[4824]: --webhook-port=9743 \ Feb 24 00:07:01 crc kubenswrapper[4824]: ${ho_enable} \ Feb 24 00:07:01 crc kubenswrapper[4824]: --enable-interconnect \ Feb 24 00:07:01 crc kubenswrapper[4824]: --disable-approver \ Feb 24 00:07:01 crc kubenswrapper[4824]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 24 00:07:01 crc kubenswrapper[4824]: --wait-for-kubernetes-api=200s \ Feb 24 00:07:01 crc kubenswrapper[4824]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 24 00:07:01 crc kubenswrapper[4824]: --loglevel="${LOGLEVEL}" Feb 24 00:07:01 crc kubenswrapper[4824]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 00:07:01 crc kubenswrapper[4824]: > logger="UnhandledError" Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.013307 4824 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 00:07:01 crc kubenswrapper[4824]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 00:07:01 crc 
kubenswrapper[4824]: if [[ -f "/env/_master" ]]; then Feb 24 00:07:01 crc kubenswrapper[4824]: set -o allexport Feb 24 00:07:01 crc kubenswrapper[4824]: source "/env/_master" Feb 24 00:07:01 crc kubenswrapper[4824]: set +o allexport Feb 24 00:07:01 crc kubenswrapper[4824]: fi Feb 24 00:07:01 crc kubenswrapper[4824]: Feb 24 00:07:01 crc kubenswrapper[4824]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 24 00:07:01 crc kubenswrapper[4824]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 00:07:01 crc kubenswrapper[4824]: --disable-webhook \ Feb 24 00:07:01 crc kubenswrapper[4824]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 24 00:07:01 crc kubenswrapper[4824]: --loglevel="${LOGLEVEL}" Feb 24 00:07:01 crc kubenswrapper[4824]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 00:07:01 crc kubenswrapper[4824]: > logger="UnhandledError" Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.014565 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.021837 4824 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePo
licy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.023072 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 24 00:07:01 crc kubenswrapper[4824]: W0224 00:07:01.023526 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-b39afc9921960eda1889ca26d8c3baefcdafc5477509b6dda05ed54213eb5e89 WatchSource:0}: Error finding container b39afc9921960eda1889ca26d8c3baefcdafc5477509b6dda05ed54213eb5e89: Status 404 returned error can't find the container with id b39afc9921960eda1889ca26d8c3baefcdafc5477509b6dda05ed54213eb5e89 Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.027085 4824 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 00:07:01 crc kubenswrapper[4824]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 24 00:07:01 crc kubenswrapper[4824]: set -o allexport Feb 24 00:07:01 crc kubenswrapper[4824]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 24 00:07:01 crc kubenswrapper[4824]: source /etc/kubernetes/apiserver-url.env Feb 24 00:07:01 crc kubenswrapper[4824]: else Feb 24 00:07:01 crc kubenswrapper[4824]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 24 00:07:01 crc kubenswrapper[4824]: exit 1 Feb 24 00:07:01 
crc kubenswrapper[4824]: fi Feb 24 00:07:01 crc kubenswrapper[4824]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 24 00:07:01 crc kubenswrapper[4824]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4
dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 00:07:01 crc kubenswrapper[4824]: > logger="UnhandledError" Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.028207 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 
00:07:01.058644 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.058697 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.058722 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.058750 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.058769 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:01Z","lastTransitionTime":"2026-02-24T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.061422 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2538f68b39e89c4c421e500169ad97f720f05856bf8a53b21bdbe1c1af3454fd"} Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.063198 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2cebbd1b17e2d15613f1695389433d59bfc40c927a97a476b3d61d4415972ee2"} Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.063654 4824 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.065091 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b39afc9921960eda1889ca26d8c3baefcdafc5477509b6dda05ed54213eb5e89"} Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.065778 4824 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 00:07:01 crc kubenswrapper[4824]: container 
&Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 00:07:01 crc kubenswrapper[4824]: if [[ -f "/env/_master" ]]; then Feb 24 00:07:01 crc kubenswrapper[4824]: set -o allexport Feb 24 00:07:01 crc kubenswrapper[4824]: source "/env/_master" Feb 24 00:07:01 crc kubenswrapper[4824]: set +o allexport Feb 24 00:07:01 crc kubenswrapper[4824]: fi Feb 24 00:07:01 crc kubenswrapper[4824]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 24 00:07:01 crc kubenswrapper[4824]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 24 00:07:01 crc kubenswrapper[4824]: ho_enable="--enable-hybrid-overlay" Feb 24 00:07:01 crc kubenswrapper[4824]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 24 00:07:01 crc kubenswrapper[4824]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 24 00:07:01 crc kubenswrapper[4824]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 24 00:07:01 crc kubenswrapper[4824]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 00:07:01 crc kubenswrapper[4824]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 24 00:07:01 crc kubenswrapper[4824]: --webhook-host=127.0.0.1 \ Feb 24 00:07:01 crc kubenswrapper[4824]: --webhook-port=9743 \ Feb 24 00:07:01 crc kubenswrapper[4824]: ${ho_enable} \ Feb 24 00:07:01 crc kubenswrapper[4824]: --enable-interconnect \ Feb 24 00:07:01 crc kubenswrapper[4824]: --disable-approver \ Feb 24 00:07:01 crc kubenswrapper[4824]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 24 00:07:01 crc kubenswrapper[4824]: --wait-for-kubernetes-api=200s \ Feb 24 00:07:01 crc kubenswrapper[4824]: 
--pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 24 00:07:01 crc kubenswrapper[4824]: --loglevel="${LOGLEVEL}" Feb 24 00:07:01 crc kubenswrapper[4824]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 00:07:01 crc kubenswrapper[4824]: > logger="UnhandledError" Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.065917 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.066653 4824 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 00:07:01 crc kubenswrapper[4824]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 24 00:07:01 crc kubenswrapper[4824]: set -o allexport Feb 24 00:07:01 crc kubenswrapper[4824]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 24 00:07:01 crc kubenswrapper[4824]: source /etc/kubernetes/apiserver-url.env Feb 24 00:07:01 crc kubenswrapper[4824]: else Feb 24 00:07:01 crc kubenswrapper[4824]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 24 00:07:01 crc kubenswrapper[4824]: exit 1 Feb 24 00:07:01 crc kubenswrapper[4824]: fi Feb 24 00:07:01 crc kubenswrapper[4824]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 24 00:07:01 crc kubenswrapper[4824]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 00:07:01 crc kubenswrapper[4824]: > logger="UnhandledError" Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.067738 4824 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 00:07:01 crc kubenswrapper[4824]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 00:07:01 crc kubenswrapper[4824]: if [[ -f "/env/_master" ]]; then Feb 24 00:07:01 crc kubenswrapper[4824]: set -o allexport Feb 24 00:07:01 crc kubenswrapper[4824]: source "/env/_master" Feb 24 00:07:01 crc kubenswrapper[4824]: set +o allexport Feb 24 00:07:01 crc 
kubenswrapper[4824]: fi Feb 24 00:07:01 crc kubenswrapper[4824]: Feb 24 00:07:01 crc kubenswrapper[4824]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 24 00:07:01 crc kubenswrapper[4824]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 00:07:01 crc kubenswrapper[4824]: --disable-webhook \ Feb 24 00:07:01 crc kubenswrapper[4824]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 24 00:07:01 crc kubenswrapper[4824]: --loglevel="${LOGLEVEL}" Feb 24 00:07:01 crc kubenswrapper[4824]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Conta
inerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 00:07:01 crc kubenswrapper[4824]: > logger="UnhandledError" Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.067830 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.069041 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.075475 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.089415 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.107405 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.117649 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.130344 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.143528 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.153207 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.162150 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.163387 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.163501 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.163594 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.163694 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.163788 4824 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:01Z","lastTransitionTime":"2026-02-24T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.172928 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.182202 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.191886 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.204387 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.267747 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.267816 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.267828 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.267848 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.267861 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:01Z","lastTransitionTime":"2026-02-24T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.338743 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.338859 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.338923 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.339021 4824 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.339039 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:07:02.33900453 +0000 UTC m=+86.328628999 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.339074 4824 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.339102 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:02.339082262 +0000 UTC m=+86.328706731 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.339120 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:02.339109713 +0000 UTC m=+86.328734182 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.370369 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.370416 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.370430 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.370450 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.370463 4824 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:01Z","lastTransitionTime":"2026-02-24T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.439658 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.439725 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.439955 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.439981 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.439997 4824 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.439992 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.440068 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:02.440049759 +0000 UTC m=+86.429674238 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.440071 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.440092 4824 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.440177 4824 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:02.440150981 +0000 UTC m=+86.429775490 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.475042 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.475096 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.475106 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.475162 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.475174 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:01Z","lastTransitionTime":"2026-02-24T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.578023 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.578078 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.578093 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.578114 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.578128 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:01Z","lastTransitionTime":"2026-02-24T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.681440 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.681494 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.681507 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.681543 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.681554 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:01Z","lastTransitionTime":"2026-02-24T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.690718 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 11:09:28.567422229 +0000 UTC Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.693017 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:01 crc kubenswrapper[4824]: E0224 00:07:01.693173 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.783873 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.783953 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.783964 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.783982 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.783994 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:01Z","lastTransitionTime":"2026-02-24T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.886578 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.886641 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.886658 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.886687 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.886707 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:01Z","lastTransitionTime":"2026-02-24T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.989271 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.989343 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.989353 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.989372 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:01 crc kubenswrapper[4824]: I0224 00:07:01.989382 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:01Z","lastTransitionTime":"2026-02-24T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.092757 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.093107 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.093262 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.093354 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.093444 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:02Z","lastTransitionTime":"2026-02-24T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.195958 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.196620 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.196650 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.196674 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.196692 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:02Z","lastTransitionTime":"2026-02-24T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.300158 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.300201 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.300215 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.300233 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.300244 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:02Z","lastTransitionTime":"2026-02-24T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.349251 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.349356 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.349391 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:02 crc kubenswrapper[4824]: E0224 00:07:02.349506 4824 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:07:02 crc kubenswrapper[4824]: E0224 00:07:02.349599 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:07:04.34955174 +0000 UTC m=+88.339176219 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:07:02 crc kubenswrapper[4824]: E0224 00:07:02.349660 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:04.349644712 +0000 UTC m=+88.339269271 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:07:02 crc kubenswrapper[4824]: E0224 00:07:02.349753 4824 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:07:02 crc kubenswrapper[4824]: E0224 00:07:02.349938 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:04.349889978 +0000 UTC m=+88.339514577 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.403685 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.403735 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.403750 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.403768 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.403778 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:02Z","lastTransitionTime":"2026-02-24T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.450571 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.450652 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:02 crc kubenswrapper[4824]: E0224 00:07:02.450893 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:07:02 crc kubenswrapper[4824]: E0224 00:07:02.450915 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:07:02 crc kubenswrapper[4824]: E0224 00:07:02.450910 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:07:02 crc kubenswrapper[4824]: E0224 00:07:02.450982 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:07:02 crc kubenswrapper[4824]: E0224 00:07:02.451007 4824 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr 
for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:02 crc kubenswrapper[4824]: E0224 00:07:02.450928 4824 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:02 crc kubenswrapper[4824]: E0224 00:07:02.451106 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:04.451077331 +0000 UTC m=+88.440701840 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:02 crc kubenswrapper[4824]: E0224 00:07:02.451140 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:04.451126302 +0000 UTC m=+88.440750811 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.508321 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.508402 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.508415 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.508459 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.508473 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:02Z","lastTransitionTime":"2026-02-24T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.590349 4824 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.612362 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.612433 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.612452 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.612481 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.612502 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:02Z","lastTransitionTime":"2026-02-24T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.691318 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 19:13:43.475245706 +0000 UTC Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.693918 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:02 crc kubenswrapper[4824]: E0224 00:07:02.694194 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.694390 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:02 crc kubenswrapper[4824]: E0224 00:07:02.694707 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.701178 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.702171 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.704067 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.705894 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.708642 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.710069 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.711576 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.714926 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.716017 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.716257 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.716459 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.716725 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.716949 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:02Z","lastTransitionTime":"2026-02-24T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.717331 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.719456 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.720700 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.722323 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.723506 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.724671 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.728209 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.730741 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.733789 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.734703 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.736120 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.738609 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.740007 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.742575 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.743622 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.746036 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.747135 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.749542 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.751278 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.752595 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.754483 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.755738 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.757058 4824 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.757240 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.759757 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.761284 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.761891 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.764018 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.764925 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.766738 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.768349 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.771664 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.772976 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.775645 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.777222 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.778872 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.780055 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.781387 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.782645 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.784355 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.785831 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.786976 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.788840 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.789804 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.790858 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.791627 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.821844 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.822228 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:02 crc 
kubenswrapper[4824]: I0224 00:07:02.822426 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.822628 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.822786 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:02Z","lastTransitionTime":"2026-02-24T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.926556 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.926637 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.926647 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.926667 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:02 crc kubenswrapper[4824]: I0224 00:07:02.926695 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:02Z","lastTransitionTime":"2026-02-24T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.030329 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.030418 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.030440 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.030474 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.030500 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:03Z","lastTransitionTime":"2026-02-24T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.134408 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.134502 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.134561 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.134598 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.134623 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:03Z","lastTransitionTime":"2026-02-24T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.238027 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.238083 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.238093 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.238114 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.238127 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:03Z","lastTransitionTime":"2026-02-24T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.341176 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.341222 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.341231 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.341247 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.341258 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:03Z","lastTransitionTime":"2026-02-24T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.444838 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.444894 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.444908 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.444927 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.444941 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:03Z","lastTransitionTime":"2026-02-24T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.548728 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.548799 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.548817 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.548845 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.548863 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:03Z","lastTransitionTime":"2026-02-24T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.651729 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.651788 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.651806 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.651829 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.651847 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:03Z","lastTransitionTime":"2026-02-24T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.692110 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 03:31:39.497937442 +0000 UTC Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.693415 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:03 crc kubenswrapper[4824]: E0224 00:07:03.693670 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.754974 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.755014 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.755024 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.755041 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.755050 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:03Z","lastTransitionTime":"2026-02-24T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.858414 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.858473 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.858488 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.858507 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.858574 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:03Z","lastTransitionTime":"2026-02-24T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.962100 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.962169 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.962188 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.962216 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:03 crc kubenswrapper[4824]: I0224 00:07:03.962234 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:03Z","lastTransitionTime":"2026-02-24T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.065772 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.065827 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.065842 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.065868 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.065894 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:04Z","lastTransitionTime":"2026-02-24T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.170166 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.170240 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.170249 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.170265 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.170277 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:04Z","lastTransitionTime":"2026-02-24T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.273226 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.273271 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.273284 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.273303 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.273318 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:04Z","lastTransitionTime":"2026-02-24T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.369991 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.370088 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:04 crc kubenswrapper[4824]: E0224 00:07:04.370130 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:07:08.370091327 +0000 UTC m=+92.359715816 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.370164 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:04 crc kubenswrapper[4824]: E0224 00:07:04.370218 4824 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:07:04 crc kubenswrapper[4824]: E0224 00:07:04.370268 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:08.370254202 +0000 UTC m=+92.359878691 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:07:04 crc kubenswrapper[4824]: E0224 00:07:04.370301 4824 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:07:04 crc kubenswrapper[4824]: E0224 00:07:04.370427 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:08.370394055 +0000 UTC m=+92.360018534 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.376497 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.376543 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.376554 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.376573 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:04 crc 
kubenswrapper[4824]: I0224 00:07:04.376592 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:04Z","lastTransitionTime":"2026-02-24T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.471732 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.471793 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:04 crc kubenswrapper[4824]: E0224 00:07:04.472046 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:07:04 crc kubenswrapper[4824]: E0224 00:07:04.472104 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:07:04 crc kubenswrapper[4824]: E0224 00:07:04.472120 4824 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:04 crc kubenswrapper[4824]: E0224 00:07:04.472211 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:08.472190083 +0000 UTC m=+92.461814572 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:04 crc kubenswrapper[4824]: E0224 00:07:04.472750 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:07:04 crc kubenswrapper[4824]: E0224 00:07:04.472902 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:07:04 crc kubenswrapper[4824]: E0224 00:07:04.472995 4824 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:04 crc kubenswrapper[4824]: E0224 00:07:04.473153 4824 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:08.473129027 +0000 UTC m=+92.462753506 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.479692 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.479745 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.479760 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.479782 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.479795 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:04Z","lastTransitionTime":"2026-02-24T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.582975 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.583057 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.583071 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.583628 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.583765 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:04Z","lastTransitionTime":"2026-02-24T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.686973 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.687063 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.687082 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.687111 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.687132 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:04Z","lastTransitionTime":"2026-02-24T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.693222 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:04:25.431215024 +0000 UTC Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.693457 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.693512 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:04 crc kubenswrapper[4824]: E0224 00:07:04.693887 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:04 crc kubenswrapper[4824]: E0224 00:07:04.693899 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.790868 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.790918 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.790936 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.790957 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.790969 4824 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:04Z","lastTransitionTime":"2026-02-24T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.893566 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.893610 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.893622 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.893640 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.893651 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:04Z","lastTransitionTime":"2026-02-24T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.996333 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.996376 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.996389 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.996406 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:04 crc kubenswrapper[4824]: I0224 00:07:04.996416 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:04Z","lastTransitionTime":"2026-02-24T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.099161 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.099219 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.099235 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.099258 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.099274 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:05Z","lastTransitionTime":"2026-02-24T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.202704 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.202747 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.202759 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.202775 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.202784 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:05Z","lastTransitionTime":"2026-02-24T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.306329 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.306385 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.306397 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.306426 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.306441 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:05Z","lastTransitionTime":"2026-02-24T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.409391 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.409491 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.409549 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.409590 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.409618 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:05Z","lastTransitionTime":"2026-02-24T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.512658 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.512712 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.512724 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.512745 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.512759 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:05Z","lastTransitionTime":"2026-02-24T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.615297 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.615338 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.615347 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.615363 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.615373 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:05Z","lastTransitionTime":"2026-02-24T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.692859 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:05 crc kubenswrapper[4824]: E0224 00:07:05.693043 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.693921 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:42:35.917051813 +0000 UTC Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.718873 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.718947 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.718966 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.718996 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.719014 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:05Z","lastTransitionTime":"2026-02-24T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.822271 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.822346 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.822372 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.822404 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.822430 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:05Z","lastTransitionTime":"2026-02-24T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.924899 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.924944 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.924955 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.924971 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:05 crc kubenswrapper[4824]: I0224 00:07:05.924982 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:05Z","lastTransitionTime":"2026-02-24T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.027936 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.027988 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.027999 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.028015 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.028027 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.131197 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.131245 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.131258 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.131277 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.131290 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.233882 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.233940 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.233953 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.233971 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.233986 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.337229 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.337775 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.337928 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.338080 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.338222 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.442509 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.442585 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.442595 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.442620 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.442633 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.545502 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.545580 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.545596 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.545619 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.545635 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.649603 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.649888 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.649956 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.650034 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.650164 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.693309 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.693430 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.694276 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 23:22:08.239732663 +0000 UTC Feb 24 00:07:06 crc kubenswrapper[4824]: E0224 00:07:06.694172 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:06 crc kubenswrapper[4824]: E0224 00:07:06.694354 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.705696 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.715271 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.722979 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.733318 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.742849 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.752398 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.752430 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.752440 4824 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.752471 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.752482 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.753684 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.856296 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.856336 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc 
kubenswrapper[4824]: I0224 00:07:06.856347 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.856364 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.856376 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.959036 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.959118 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.959128 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.959143 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4824]: I0224 00:07:06.959156 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.063118 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.063153 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.063162 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.063177 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.063191 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:07Z","lastTransitionTime":"2026-02-24T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.165552 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.165906 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.166113 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.166220 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.166284 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:07Z","lastTransitionTime":"2026-02-24T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.268854 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.268917 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.268936 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.268962 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.268982 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:07Z","lastTransitionTime":"2026-02-24T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.371320 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.371708 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.371887 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.372010 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.372115 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:07Z","lastTransitionTime":"2026-02-24T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.475025 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.475081 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.475094 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.475112 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.475124 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:07Z","lastTransitionTime":"2026-02-24T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.578801 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.579218 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.579290 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.579362 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.579466 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:07Z","lastTransitionTime":"2026-02-24T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.682350 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.682409 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.682423 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.682444 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.682459 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:07Z","lastTransitionTime":"2026-02-24T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.693534 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:07 crc kubenswrapper[4824]: E0224 00:07:07.694201 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.694409 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 01:03:42.991021463 +0000 UTC Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.727581 4824 scope.go:117] "RemoveContainer" containerID="b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a" Feb 24 00:07:07 crc kubenswrapper[4824]: E0224 00:07:07.727907 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.728513 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.785889 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.785976 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.786039 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.786073 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.786145 4824 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:07Z","lastTransitionTime":"2026-02-24T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.889274 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.889352 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.889366 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.889463 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.889483 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:07Z","lastTransitionTime":"2026-02-24T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.995087 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.995454 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.995597 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.995693 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:07 crc kubenswrapper[4824]: I0224 00:07:07.995782 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:07Z","lastTransitionTime":"2026-02-24T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.085753 4824 scope.go:117] "RemoveContainer" containerID="b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a" Feb 24 00:07:08 crc kubenswrapper[4824]: E0224 00:07:08.085978 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.098906 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.098949 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.098963 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.098981 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.098993 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:08Z","lastTransitionTime":"2026-02-24T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.201887 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.201930 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.201939 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.201955 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.201970 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:08Z","lastTransitionTime":"2026-02-24T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.304924 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.304975 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.304989 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.305011 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.305023 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:08Z","lastTransitionTime":"2026-02-24T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.407771 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.407883 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:08 crc kubenswrapper[4824]: E0224 00:07:08.407985 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:07:16.40795859 +0000 UTC m=+100.397583069 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:07:08 crc kubenswrapper[4824]: E0224 00:07:08.408056 4824 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:07:08 crc kubenswrapper[4824]: E0224 00:07:08.408120 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:16.408102764 +0000 UTC m=+100.397727253 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.408056 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.408055 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:08 crc kubenswrapper[4824]: E0224 00:07:08.408156 4824 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.408186 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.408204 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:08 crc kubenswrapper[4824]: E0224 00:07:08.408213 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:16.408199727 +0000 UTC m=+100.397824206 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.408228 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.408245 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:08Z","lastTransitionTime":"2026-02-24T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.509460 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.509585 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:08 crc kubenswrapper[4824]: E0224 00:07:08.509739 4824 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:07:08 crc kubenswrapper[4824]: E0224 00:07:08.509783 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:07:08 crc kubenswrapper[4824]: E0224 00:07:08.509800 4824 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:08 crc kubenswrapper[4824]: E0224 00:07:08.509861 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:07:08 crc kubenswrapper[4824]: E0224 00:07:08.509876 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:16.509855351 +0000 UTC m=+100.499479840 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:08 crc kubenswrapper[4824]: E0224 00:07:08.509903 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:07:08 crc kubenswrapper[4824]: E0224 00:07:08.509918 4824 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:08 crc kubenswrapper[4824]: E0224 00:07:08.509974 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:16.509956034 +0000 UTC m=+100.499580503 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.511002 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.511031 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.511041 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.511055 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.511064 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:08Z","lastTransitionTime":"2026-02-24T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.613703 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.614148 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.614302 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.614467 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.614652 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:08Z","lastTransitionTime":"2026-02-24T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.693107 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.693201 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:08 crc kubenswrapper[4824]: E0224 00:07:08.694361 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:08 crc kubenswrapper[4824]: E0224 00:07:08.694571 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.694488 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:14:03.124980166 +0000 UTC Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.717973 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.718194 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.718205 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.718250 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.718264 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:08Z","lastTransitionTime":"2026-02-24T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.821378 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.821426 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.821439 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.821466 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.821480 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:08Z","lastTransitionTime":"2026-02-24T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.924066 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.924116 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.924147 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.924166 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:08 crc kubenswrapper[4824]: I0224 00:07:08.924176 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:08Z","lastTransitionTime":"2026-02-24T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.027198 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.027589 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.027678 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.027781 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.027883 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:09Z","lastTransitionTime":"2026-02-24T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.131305 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.131810 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.132029 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.132210 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.132394 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:09Z","lastTransitionTime":"2026-02-24T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.236336 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.236389 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.236399 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.236419 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.236439 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:09Z","lastTransitionTime":"2026-02-24T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.338804 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.338838 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.338847 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.338862 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.338875 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:09Z","lastTransitionTime":"2026-02-24T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.441962 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.442042 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.442067 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.442104 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.442129 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:09Z","lastTransitionTime":"2026-02-24T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.544861 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.545035 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.545062 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.545096 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.545119 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:09Z","lastTransitionTime":"2026-02-24T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.648983 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.649066 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.649084 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.649102 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.649114 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:09Z","lastTransitionTime":"2026-02-24T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.692752 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:09 crc kubenswrapper[4824]: E0224 00:07:09.692987 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.695783 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 01:55:50.967137565 +0000 UTC Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.752511 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.752582 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.752623 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.752646 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.752663 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:09Z","lastTransitionTime":"2026-02-24T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.855180 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.855272 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.855286 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.855308 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.855323 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:09Z","lastTransitionTime":"2026-02-24T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.958409 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.958498 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.958542 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.958574 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:09 crc kubenswrapper[4824]: I0224 00:07:09.958592 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:09Z","lastTransitionTime":"2026-02-24T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.061440 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.061933 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.062028 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.062130 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.062209 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.164859 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.164903 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.164919 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.164939 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.164952 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.267423 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.267756 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.267823 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.267950 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.268042 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.370824 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.371174 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.371238 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.371309 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.371393 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.466619 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.466722 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.466746 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.466779 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.466808 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.483317 4824 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 24 00:07:10 crc kubenswrapper[4824]: E0224 00:07:10.486935 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.493038 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.493169 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.493188 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.493214 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.493231 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:10 crc kubenswrapper[4824]: E0224 00:07:10.514580 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.520590 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.520644 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.520660 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.520680 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.520694 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:10 crc kubenswrapper[4824]: E0224 00:07:10.537687 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.543019 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.543078 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.543096 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.543124 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.543145 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4824]: E0224 00:07:10.561693 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.567344 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.567404 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.567417 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.567443 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.567459 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4824]: E0224 00:07:10.583094 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:10 crc kubenswrapper[4824]: E0224 00:07:10.583292 4824 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.585504 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.585603 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.585628 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.585665 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.585687 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.689958 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.690026 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.690038 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.690055 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.690070 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.693806 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.693920 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:10 crc kubenswrapper[4824]: E0224 00:07:10.693951 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:10 crc kubenswrapper[4824]: E0224 00:07:10.694194 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.696496 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 03:17:48.634389171 +0000 UTC Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.793254 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.793327 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.793343 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.793367 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.793388 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.895252 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.895295 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.895305 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.895321 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.895330 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.998025 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.998063 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.998073 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.998090 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4824]: I0224 00:07:10.998100 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.100293 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.100344 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.100357 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.100377 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.100394 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:11Z","lastTransitionTime":"2026-02-24T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.203283 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.203369 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.203395 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.203436 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.203462 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:11Z","lastTransitionTime":"2026-02-24T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.306448 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.306499 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.306512 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.306548 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.306563 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:11Z","lastTransitionTime":"2026-02-24T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.410289 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.410360 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.410380 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.410408 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.410430 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:11Z","lastTransitionTime":"2026-02-24T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.513152 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.513206 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.513222 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.513246 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.513260 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:11Z","lastTransitionTime":"2026-02-24T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.616669 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.616746 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.616769 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.616802 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.616825 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:11Z","lastTransitionTime":"2026-02-24T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.692846 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:11 crc kubenswrapper[4824]: E0224 00:07:11.693436 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.696929 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 19:09:05.26078533 +0000 UTC Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.720557 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.720604 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.720621 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.720647 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.720664 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:11Z","lastTransitionTime":"2026-02-24T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.823236 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.823299 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.823435 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.823467 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.823488 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:11Z","lastTransitionTime":"2026-02-24T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.926988 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.927047 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.927057 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.927075 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:11 crc kubenswrapper[4824]: I0224 00:07:11.927086 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:11Z","lastTransitionTime":"2026-02-24T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.029358 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.029433 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.029459 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.029491 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.029569 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:12Z","lastTransitionTime":"2026-02-24T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.131492 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.131990 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.132154 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.132345 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.132523 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:12Z","lastTransitionTime":"2026-02-24T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.236290 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.236856 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.237078 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.237309 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.237741 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:12Z","lastTransitionTime":"2026-02-24T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.341131 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.341187 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.341199 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.341220 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.341234 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:12Z","lastTransitionTime":"2026-02-24T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.443900 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.444391 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.444572 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.444734 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.444877 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:12Z","lastTransitionTime":"2026-02-24T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.547691 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.547762 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.547781 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.547809 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.547830 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:12Z","lastTransitionTime":"2026-02-24T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.651215 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.651287 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.651312 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.651344 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.651372 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:12Z","lastTransitionTime":"2026-02-24T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.693510 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.693605 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:12 crc kubenswrapper[4824]: E0224 00:07:12.693790 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:12 crc kubenswrapper[4824]: E0224 00:07:12.694243 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.699025 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 16:24:54.695768496 +0000 UTC Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.753858 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.753917 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.753931 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.753952 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.753971 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:12Z","lastTransitionTime":"2026-02-24T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.858002 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.858094 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.858109 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.858132 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.858148 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:12Z","lastTransitionTime":"2026-02-24T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.962094 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.962149 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.962163 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.962185 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:12 crc kubenswrapper[4824]: I0224 00:07:12.962197 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:12Z","lastTransitionTime":"2026-02-24T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.065455 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.065567 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.065588 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.065617 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.065641 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:13Z","lastTransitionTime":"2026-02-24T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.168050 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.168114 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.168133 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.168162 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.168182 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:13Z","lastTransitionTime":"2026-02-24T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.271695 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.271811 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.271841 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.271877 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.271901 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:13Z","lastTransitionTime":"2026-02-24T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.375054 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.375150 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.375186 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.375221 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.375249 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:13Z","lastTransitionTime":"2026-02-24T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.478703 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.478801 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.478826 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.478852 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.478872 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:13Z","lastTransitionTime":"2026-02-24T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.581170 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.581254 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.581280 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.581316 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.581343 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:13Z","lastTransitionTime":"2026-02-24T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.685235 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.685322 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.685349 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.685382 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.685410 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:13Z","lastTransitionTime":"2026-02-24T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.692849 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:13 crc kubenswrapper[4824]: E0224 00:07:13.693288 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.699639 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 04:49:03.415965624 +0000 UTC Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.787933 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.787994 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.788013 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.788042 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.788062 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:13Z","lastTransitionTime":"2026-02-24T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.891480 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.891565 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.891579 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.891715 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.891729 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:13Z","lastTransitionTime":"2026-02-24T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.994542 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.994576 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.994585 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.994601 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:13 crc kubenswrapper[4824]: I0224 00:07:13.994614 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:13Z","lastTransitionTime":"2026-02-24T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.097015 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.097053 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.097063 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.097079 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.097090 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:14Z","lastTransitionTime":"2026-02-24T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.106559 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95"} Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.106645 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329"} Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.125435 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.140636 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.152565 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.166742 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.178492 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.191660 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.199872 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.199921 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.199940 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.199985 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.200004 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:14Z","lastTransitionTime":"2026-02-24T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.206297 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.303008 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.303066 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.303077 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.303097 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.303112 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:14Z","lastTransitionTime":"2026-02-24T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.405624 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.405721 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.405733 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.405751 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.405762 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:14Z","lastTransitionTime":"2026-02-24T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.509356 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.509394 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.509404 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.509421 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.509431 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:14Z","lastTransitionTime":"2026-02-24T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.613364 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.613444 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.613482 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.613514 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.613585 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:14Z","lastTransitionTime":"2026-02-24T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.693288 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.693340 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:14 crc kubenswrapper[4824]: E0224 00:07:14.693426 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:14 crc kubenswrapper[4824]: E0224 00:07:14.693688 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.700027 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 10:23:53.533786416 +0000 UTC Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.720457 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.720559 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.720583 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.720613 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.720639 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:14Z","lastTransitionTime":"2026-02-24T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.823769 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.823808 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.823817 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.823836 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.823848 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:14Z","lastTransitionTime":"2026-02-24T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.927449 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.927975 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.928153 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.928331 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:14 crc kubenswrapper[4824]: I0224 00:07:14.928495 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:14Z","lastTransitionTime":"2026-02-24T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.032455 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.033111 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.033349 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.033601 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.033824 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:15Z","lastTransitionTime":"2026-02-24T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.137115 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.137181 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.137200 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.137226 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.137246 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:15Z","lastTransitionTime":"2026-02-24T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.240686 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.240765 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.240791 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.240823 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.240848 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:15Z","lastTransitionTime":"2026-02-24T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.344450 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.344560 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.344578 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.344613 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.344637 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:15Z","lastTransitionTime":"2026-02-24T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.447986 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.448044 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.448057 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.448078 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.448095 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:15Z","lastTransitionTime":"2026-02-24T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.551373 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.551456 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.551479 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.551507 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.551573 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:15Z","lastTransitionTime":"2026-02-24T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.654681 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.654730 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.654744 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.654763 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.654778 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:15Z","lastTransitionTime":"2026-02-24T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.693390 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:15 crc kubenswrapper[4824]: E0224 00:07:15.693554 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.700251 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:22:55.159844998 +0000 UTC Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.758616 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.758662 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.758673 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.758694 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.758706 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:15Z","lastTransitionTime":"2026-02-24T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.866044 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.866206 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.866220 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.866242 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.866265 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:15Z","lastTransitionTime":"2026-02-24T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.969544 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.969597 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.969607 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.969629 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:15 crc kubenswrapper[4824]: I0224 00:07:15.969642 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:15Z","lastTransitionTime":"2026-02-24T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.073084 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.073621 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.073635 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.073659 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.073677 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.176357 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.176434 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.176462 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.176495 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.176548 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.280850 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.280909 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.280922 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.280944 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.280961 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.384765 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.384813 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.384825 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.384843 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.384857 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.488714 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.488791 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.488815 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.488845 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.488868 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.491425 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.491612 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:16 crc kubenswrapper[4824]: E0224 00:07:16.491685 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:07:32.49165666 +0000 UTC m=+116.481281159 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.491735 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:16 crc kubenswrapper[4824]: E0224 00:07:16.491875 4824 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:07:16 crc kubenswrapper[4824]: E0224 00:07:16.491883 4824 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:07:16 crc kubenswrapper[4824]: E0224 00:07:16.491970 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:32.491946648 +0000 UTC m=+116.481571147 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:07:16 crc kubenswrapper[4824]: E0224 00:07:16.492009 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:32.491991649 +0000 UTC m=+116.481616158 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.591913 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.591971 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.591992 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.592022 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.592044 4824 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.592258 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.592320 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:16 crc kubenswrapper[4824]: E0224 00:07:16.592600 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:07:16 crc kubenswrapper[4824]: E0224 00:07:16.592643 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:07:16 crc kubenswrapper[4824]: E0224 00:07:16.592669 4824 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:16 crc kubenswrapper[4824]: E0224 00:07:16.592771 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:32.59273762 +0000 UTC m=+116.582362129 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:16 crc kubenswrapper[4824]: E0224 00:07:16.592811 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:07:16 crc kubenswrapper[4824]: E0224 00:07:16.592850 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:07:16 crc kubenswrapper[4824]: E0224 00:07:16.592874 4824 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:16 crc kubenswrapper[4824]: E0224 00:07:16.592950 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 
nodeName:}" failed. No retries permitted until 2026-02-24 00:07:32.592926585 +0000 UTC m=+116.582551084 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.693412 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.693396 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:16 crc kubenswrapper[4824]: E0224 00:07:16.694014 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:16 crc kubenswrapper[4824]: E0224 00:07:16.694132 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.695204 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.695270 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.695290 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.695348 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.695362 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.700901 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 03:46:23.399819054 +0000 UTC Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.712868 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218
a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.726033 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.741858 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.758870 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.774543 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.785347 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.798680 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.798728 4824 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.798740 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.798760 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.798773 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.799680 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.901820 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.901892 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.901904 4824 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.901922 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:16 crc kubenswrapper[4824]: I0224 00:07:16.901933 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.005173 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.005238 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.005256 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.005281 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.005300 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:17Z","lastTransitionTime":"2026-02-24T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.109560 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.109621 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.109636 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.109658 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.109674 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:17Z","lastTransitionTime":"2026-02-24T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.120151 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486"} Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.134726 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:17Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.149685 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:17Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.160733 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:17Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.178833 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, 
/tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:17Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.193492 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:17Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.208701 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:17Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.213824 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.213873 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.213906 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.213924 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.213935 4824 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:17Z","lastTransitionTime":"2026-02-24T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.229627 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:17Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.317317 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.317384 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.317402 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.317425 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.317441 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:17Z","lastTransitionTime":"2026-02-24T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.420481 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.420583 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.420602 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.420629 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.420649 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:17Z","lastTransitionTime":"2026-02-24T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.524125 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.524173 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.524186 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.524206 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.524224 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:17Z","lastTransitionTime":"2026-02-24T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.627346 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.627382 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.627391 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.627407 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.627417 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:17Z","lastTransitionTime":"2026-02-24T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.693088 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:17 crc kubenswrapper[4824]: E0224 00:07:17.693300 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.701411 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:15:04.877651357 +0000 UTC Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.730796 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.730842 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.730854 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.730873 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.730886 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:17Z","lastTransitionTime":"2026-02-24T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.833611 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.833662 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.833673 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.833690 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.833701 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:17Z","lastTransitionTime":"2026-02-24T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.936901 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.936977 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.937012 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.937042 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:17 crc kubenswrapper[4824]: I0224 00:07:17.937054 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:17Z","lastTransitionTime":"2026-02-24T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.039611 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.039667 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.039678 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.039706 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.039725 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:18Z","lastTransitionTime":"2026-02-24T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.125155 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057"} Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.142677 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.142752 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.142772 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.142802 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.142823 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:18Z","lastTransitionTime":"2026-02-24T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.143267 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218
a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.160158 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.174723 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.190399 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.208997 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.229013 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.246874 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.246957 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.246976 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.247035 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.247055 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:18Z","lastTransitionTime":"2026-02-24T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.250151 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:18Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.350663 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.350718 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.350728 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.350746 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.350758 4824 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:18Z","lastTransitionTime":"2026-02-24T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.454635 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.454713 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.454736 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.454768 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.454793 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:18Z","lastTransitionTime":"2026-02-24T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.557372 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.557443 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.557467 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.557497 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.557548 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:18Z","lastTransitionTime":"2026-02-24T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.660044 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.660104 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.660127 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.660154 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.660171 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:18Z","lastTransitionTime":"2026-02-24T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.693756 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.693821 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:18 crc kubenswrapper[4824]: E0224 00:07:18.693975 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:18 crc kubenswrapper[4824]: E0224 00:07:18.694103 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.702267 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 07:15:25.610795511 +0000 UTC Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.763575 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.763630 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.763647 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.763669 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.763685 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:18Z","lastTransitionTime":"2026-02-24T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.866943 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.867399 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.867583 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.867762 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.867916 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:18Z","lastTransitionTime":"2026-02-24T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.971215 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.971261 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.971272 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.971291 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:18 crc kubenswrapper[4824]: I0224 00:07:18.971304 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:18Z","lastTransitionTime":"2026-02-24T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.074727 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.074814 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.074842 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.074877 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.074897 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:19Z","lastTransitionTime":"2026-02-24T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.178636 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.179032 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.179253 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.179515 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.179745 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:19Z","lastTransitionTime":"2026-02-24T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.283031 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.283835 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.283868 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.283901 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.283917 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:19Z","lastTransitionTime":"2026-02-24T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.387252 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.387318 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.387335 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.387398 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.387417 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:19Z","lastTransitionTime":"2026-02-24T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.491653 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.491733 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.491778 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.491823 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.491848 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:19Z","lastTransitionTime":"2026-02-24T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.595351 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.595399 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.595411 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.595440 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.595452 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:19Z","lastTransitionTime":"2026-02-24T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.693717 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:19 crc kubenswrapper[4824]: E0224 00:07:19.693874 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.698129 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.698191 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.698206 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.698229 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.698246 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:19Z","lastTransitionTime":"2026-02-24T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.702749 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 00:56:45.697461011 +0000 UTC Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.800390 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.800485 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.800548 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.800591 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.800612 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:19Z","lastTransitionTime":"2026-02-24T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.903969 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.904042 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.904062 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.904089 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:19 crc kubenswrapper[4824]: I0224 00:07:19.904109 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:19Z","lastTransitionTime":"2026-02-24T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.007767 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.007838 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.007854 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.007876 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.007895 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.111556 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.111644 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.111663 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.111694 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.111714 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.215619 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.215711 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.215761 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.215785 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.215835 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.319510 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.319573 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.319583 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.319603 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.319616 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.423666 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.423747 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.423769 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.423795 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.423818 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.527940 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.527998 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.528014 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.528055 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.528075 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.631954 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.632103 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.632132 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.632171 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.632197 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.693256 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.693323 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:20 crc kubenswrapper[4824]: E0224 00:07:20.693457 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:20 crc kubenswrapper[4824]: E0224 00:07:20.693717 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.702851 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 03:42:42.009627066 +0000 UTC Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.735960 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.736025 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.736049 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.736082 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.736106 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.838890 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.838940 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.838952 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.838972 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.838986 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.840220 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.840273 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.840283 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.840300 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.840315 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4824]: E0224 00:07:20.862034 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.867395 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.867433 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.867443 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.867460 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.867469 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4824]: E0224 00:07:20.885947 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.891031 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.891085 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.891100 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.891122 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.891135 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4824]: E0224 00:07:20.909470 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.913381 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.913449 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.913467 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.913494 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.913514 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.939558 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.939602 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.939614 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.939633 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.939645 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4824]: E0224 00:07:20.955727 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:20 crc kubenswrapper[4824]: E0224 00:07:20.955892 4824 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.958660 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.958715 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.958740 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.958761 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4824]: I0224 00:07:20.958778 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.061876 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.061927 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.061938 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.061959 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.061974 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:21Z","lastTransitionTime":"2026-02-24T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.164195 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.164253 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.164272 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.164298 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.164318 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:21Z","lastTransitionTime":"2026-02-24T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.267000 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.267090 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.267109 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.267134 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.267152 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:21Z","lastTransitionTime":"2026-02-24T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.370569 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.370623 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.370636 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.370656 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.370668 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:21Z","lastTransitionTime":"2026-02-24T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.473648 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.473692 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.473702 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.473725 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.473738 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:21Z","lastTransitionTime":"2026-02-24T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.575940 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.576249 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.576348 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.576446 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.576552 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:21Z","lastTransitionTime":"2026-02-24T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.680272 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.680324 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.680346 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.680373 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.680395 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:21Z","lastTransitionTime":"2026-02-24T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.693302 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:21 crc kubenswrapper[4824]: E0224 00:07:21.693509 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.703613 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 17:54:18.704585546 +0000 UTC Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.783554 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.784039 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.784197 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.784348 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.784562 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:21Z","lastTransitionTime":"2026-02-24T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.888297 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.888368 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.888382 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.888402 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.888418 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:21Z","lastTransitionTime":"2026-02-24T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.991911 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.991968 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.991992 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.992022 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:21 crc kubenswrapper[4824]: I0224 00:07:21.992047 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:21Z","lastTransitionTime":"2026-02-24T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.095969 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.096053 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.096072 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.096098 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.096119 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:22Z","lastTransitionTime":"2026-02-24T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.199813 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.199890 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.199907 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.199935 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.199955 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:22Z","lastTransitionTime":"2026-02-24T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.303486 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.303581 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.303603 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.303629 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.303649 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:22Z","lastTransitionTime":"2026-02-24T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.406964 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.407019 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.407034 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.407057 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.407072 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:22Z","lastTransitionTime":"2026-02-24T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.510505 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.510604 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.510653 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.510678 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.510701 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:22Z","lastTransitionTime":"2026-02-24T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.576155 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-nwxht"] Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.576608 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-nwxht" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.578752 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.580795 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.581044 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.595394 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":
{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.607825 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.613253 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.613313 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.613326 4824 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.613348 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.613363 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:22Z","lastTransitionTime":"2026-02-24T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.617793 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready 
status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.636248 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.651373 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.664331 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.668178 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxc9g\" (UniqueName: \"kubernetes.io/projected/ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf-kube-api-access-lxc9g\") pod \"node-resolver-nwxht\" (UID: \"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\") " pod="openshift-dns/node-resolver-nwxht" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.668242 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" 
(UniqueName: \"kubernetes.io/host-path/ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf-hosts-file\") pod \"node-resolver-nwxht\" (UID: \"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\") " pod="openshift-dns/node-resolver-nwxht" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.676295 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.690968 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218
a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.693143 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.693541 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:22 crc kubenswrapper[4824]: E0224 00:07:22.693725 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:22 crc kubenswrapper[4824]: E0224 00:07:22.693900 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.694166 4824 scope.go:117] "RemoveContainer" containerID="b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.703829 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 14:35:05.37190417 +0000 UTC Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.718663 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.718726 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.718738 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.718761 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.718776 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:22Z","lastTransitionTime":"2026-02-24T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.769057 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxc9g\" (UniqueName: \"kubernetes.io/projected/ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf-kube-api-access-lxc9g\") pod \"node-resolver-nwxht\" (UID: \"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\") " pod="openshift-dns/node-resolver-nwxht" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.769146 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf-hosts-file\") pod \"node-resolver-nwxht\" (UID: \"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\") " pod="openshift-dns/node-resolver-nwxht" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.769325 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf-hosts-file\") pod \"node-resolver-nwxht\" (UID: \"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\") " pod="openshift-dns/node-resolver-nwxht" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.797160 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxc9g\" (UniqueName: 
\"kubernetes.io/projected/ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf-kube-api-access-lxc9g\") pod \"node-resolver-nwxht\" (UID: \"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\") " pod="openshift-dns/node-resolver-nwxht" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.821508 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.821570 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.821584 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.821603 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.821614 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:22Z","lastTransitionTime":"2026-02-24T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.897945 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-nwxht" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.925896 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.925954 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.925973 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.926000 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.926020 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:22Z","lastTransitionTime":"2026-02-24T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.939691 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-vcbgn"] Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.940074 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-wvqfl"] Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.940348 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-wvqfl" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.940577 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.946936 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.946983 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.947168 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.947242 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.947248 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.947352 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.947418 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.947414 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-d64vq"] Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.947507 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.947584 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.947833 4824 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.948638 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.951857 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.953844 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.961953 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptable
s-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.979926 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4824]: I0224 00:07:22.994669 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.013399 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.028123 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.031066 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.031104 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.031117 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.031137 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.031150 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:23Z","lastTransitionTime":"2026-02-24T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.041047 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.057723 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, 
/tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.069783 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072165 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-multus-socket-dir-parent\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072221 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-host-run-netns\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072263 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-os-release\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072308 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-multus-daemon-config\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072348 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/28309e58-76b2-4fe6-a1e5-569b6f0b3a5e-system-cni-dir\") pod \"multus-additional-cni-plugins-d64vq\" (UID: \"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\") " pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072373 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ttlvz\" (UniqueName: \"kubernetes.io/projected/28309e58-76b2-4fe6-a1e5-569b6f0b3a5e-kube-api-access-ttlvz\") pod \"multus-additional-cni-plugins-d64vq\" (UID: \"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\") " pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072410 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-host-run-multus-certs\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072441 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/939ca085-9383-42e6-b7d6-37f101137273-rootfs\") pod \"machine-config-daemon-vcbgn\" (UID: \"939ca085-9383-42e6-b7d6-37f101137273\") " pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072466 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/28309e58-76b2-4fe6-a1e5-569b6f0b3a5e-os-release\") pod \"multus-additional-cni-plugins-d64vq\" (UID: \"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\") " pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072558 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-etc-kubernetes\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072582 4824 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/28309e58-76b2-4fe6-a1e5-569b6f0b3a5e-cnibin\") pod \"multus-additional-cni-plugins-d64vq\" (UID: \"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\") " pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072606 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-host-var-lib-cni-multus\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072637 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-multus-conf-dir\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072661 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/939ca085-9383-42e6-b7d6-37f101137273-mcd-auth-proxy-config\") pod \"machine-config-daemon-vcbgn\" (UID: \"939ca085-9383-42e6-b7d6-37f101137273\") " pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072689 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-multus-cni-dir\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072712 4824 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-system-cni-dir\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072736 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/28309e58-76b2-4fe6-a1e5-569b6f0b3a5e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d64vq\" (UID: \"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\") " pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072758 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjnn7\" (UniqueName: \"kubernetes.io/projected/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-kube-api-access-qjnn7\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072783 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6f4s\" (UniqueName: \"kubernetes.io/projected/939ca085-9383-42e6-b7d6-37f101137273-kube-api-access-j6f4s\") pod \"machine-config-daemon-vcbgn\" (UID: \"939ca085-9383-42e6-b7d6-37f101137273\") " pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072809 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/939ca085-9383-42e6-b7d6-37f101137273-proxy-tls\") pod \"machine-config-daemon-vcbgn\" (UID: \"939ca085-9383-42e6-b7d6-37f101137273\") " 
pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072835 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-hostroot\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072859 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-cni-binary-copy\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072923 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-host-var-lib-cni-bin\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.072976 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/28309e58-76b2-4fe6-a1e5-569b6f0b3a5e-cni-binary-copy\") pod \"multus-additional-cni-plugins-d64vq\" (UID: \"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\") " pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.073016 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-host-var-lib-kubelet\") pod \"multus-wvqfl\" (UID: 
\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.073046 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/28309e58-76b2-4fe6-a1e5-569b6f0b3a5e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d64vq\" (UID: \"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\") " pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.073085 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-host-run-k8s-cni-cncf-io\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.073138 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-cnibin\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.083704 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.096168 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.106741 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.121918 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, 
/tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.134284 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.134336 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.134347 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.134366 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.134381 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:23Z","lastTransitionTime":"2026-02-24T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.138319 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.151382 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nwxht" event={"ID":"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf","Type":"ContainerStarted","Data":"f7a3b89d5ba26394edfe0be92dd9665a1f87335d098f3cc58a29d34b6745d414"} Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.153108 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.153417 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.154994 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99"} Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.155455 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.172108 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174324 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6f4s\" (UniqueName: \"kubernetes.io/projected/939ca085-9383-42e6-b7d6-37f101137273-kube-api-access-j6f4s\") pod \"machine-config-daemon-vcbgn\" (UID: \"939ca085-9383-42e6-b7d6-37f101137273\") " pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174362 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/939ca085-9383-42e6-b7d6-37f101137273-proxy-tls\") pod \"machine-config-daemon-vcbgn\" (UID: \"939ca085-9383-42e6-b7d6-37f101137273\") " pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174382 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-hostroot\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174400 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/28309e58-76b2-4fe6-a1e5-569b6f0b3a5e-cni-binary-copy\") pod \"multus-additional-cni-plugins-d64vq\" (UID: \"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\") " pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174425 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-cni-binary-copy\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174442 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-host-var-lib-cni-bin\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174460 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-host-run-k8s-cni-cncf-io\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174478 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-host-var-lib-kubelet\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174493 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/28309e58-76b2-4fe6-a1e5-569b6f0b3a5e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d64vq\" (UID: \"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\") " pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174509 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-cnibin\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174543 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-os-release\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174561 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-multus-socket-dir-parent\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174580 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-host-run-netns\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174576 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-hostroot\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " 
pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174595 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-multus-daemon-config\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174743 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/28309e58-76b2-4fe6-a1e5-569b6f0b3a5e-system-cni-dir\") pod \"multus-additional-cni-plugins-d64vq\" (UID: \"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\") " pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174777 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttlvz\" (UniqueName: \"kubernetes.io/projected/28309e58-76b2-4fe6-a1e5-569b6f0b3a5e-kube-api-access-ttlvz\") pod \"multus-additional-cni-plugins-d64vq\" (UID: \"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\") " pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174807 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-host-run-multus-certs\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174833 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/939ca085-9383-42e6-b7d6-37f101137273-rootfs\") pod \"machine-config-daemon-vcbgn\" (UID: \"939ca085-9383-42e6-b7d6-37f101137273\") " 
pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174875 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/28309e58-76b2-4fe6-a1e5-569b6f0b3a5e-os-release\") pod \"multus-additional-cni-plugins-d64vq\" (UID: \"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\") " pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174911 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-etc-kubernetes\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174936 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/28309e58-76b2-4fe6-a1e5-569b6f0b3a5e-cnibin\") pod \"multus-additional-cni-plugins-d64vq\" (UID: \"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\") " pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174968 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/939ca085-9383-42e6-b7d6-37f101137273-mcd-auth-proxy-config\") pod \"machine-config-daemon-vcbgn\" (UID: \"939ca085-9383-42e6-b7d6-37f101137273\") " pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.174994 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-multus-cni-dir\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " 
pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175018 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-host-var-lib-cni-multus\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175043 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-multus-conf-dir\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175060 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-system-cni-dir\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175078 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/28309e58-76b2-4fe6-a1e5-569b6f0b3a5e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d64vq\" (UID: \"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\") " pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175100 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjnn7\" (UniqueName: \"kubernetes.io/projected/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-kube-api-access-qjnn7\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 
00:07:23.175203 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-host-var-lib-cni-bin\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175234 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-multus-daemon-config\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175253 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-host-run-k8s-cni-cncf-io\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175282 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-host-var-lib-kubelet\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175290 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-cnibin\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175310 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-etc-kubernetes\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175358 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-os-release\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175396 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-multus-socket-dir-parent\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175401 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/28309e58-76b2-4fe6-a1e5-569b6f0b3a5e-cni-binary-copy\") pod \"multus-additional-cni-plugins-d64vq\" (UID: \"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\") " pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175439 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-host-run-multus-certs\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175420 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-host-run-netns\") pod \"multus-wvqfl\" (UID: 
\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175489 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/28309e58-76b2-4fe6-a1e5-569b6f0b3a5e-system-cni-dir\") pod \"multus-additional-cni-plugins-d64vq\" (UID: \"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\") " pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175504 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/28309e58-76b2-4fe6-a1e5-569b6f0b3a5e-cnibin\") pod \"multus-additional-cni-plugins-d64vq\" (UID: \"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\") " pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175534 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-host-var-lib-cni-multus\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175555 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-multus-conf-dir\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175663 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/28309e58-76b2-4fe6-a1e5-569b6f0b3a5e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d64vq\" (UID: \"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\") " 
pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175764 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/939ca085-9383-42e6-b7d6-37f101137273-rootfs\") pod \"machine-config-daemon-vcbgn\" (UID: \"939ca085-9383-42e6-b7d6-37f101137273\") " pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175755 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-system-cni-dir\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175784 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-cni-binary-copy\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175788 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-multus-cni-dir\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.175825 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/28309e58-76b2-4fe6-a1e5-569b6f0b3a5e-os-release\") pod \"multus-additional-cni-plugins-d64vq\" (UID: \"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\") " pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.176693 4824 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/28309e58-76b2-4fe6-a1e5-569b6f0b3a5e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d64vq\" (UID: \"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\") " pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.177184 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/939ca085-9383-42e6-b7d6-37f101137273-mcd-auth-proxy-config\") pod \"machine-config-daemon-vcbgn\" (UID: \"939ca085-9383-42e6-b7d6-37f101137273\") " pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.189390 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416
f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.189704 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/939ca085-9383-42e6-b7d6-37f101137273-proxy-tls\") pod \"machine-config-daemon-vcbgn\" (UID: \"939ca085-9383-42e6-b7d6-37f101137273\") " pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.193130 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6f4s\" (UniqueName: \"kubernetes.io/projected/939ca085-9383-42e6-b7d6-37f101137273-kube-api-access-j6f4s\") pod \"machine-config-daemon-vcbgn\" (UID: \"939ca085-9383-42e6-b7d6-37f101137273\") " pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.197439 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-qjnn7\" (UniqueName: \"kubernetes.io/projected/15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac-kube-api-access-qjnn7\") pod \"multus-wvqfl\" (UID: \"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\") " pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.200489 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.215212 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttlvz\" (UniqueName: \"kubernetes.io/projected/28309e58-76b2-4fe6-a1e5-569b6f0b3a5e-kube-api-access-ttlvz\") pod \"multus-additional-cni-plugins-d64vq\" (UID: \"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\") " pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.215758 4824 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.230880 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.242341 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.242372 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.242380 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.242397 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.242406 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:23Z","lastTransitionTime":"2026-02-24T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.246393 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.260714 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-wvqfl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.269939 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.274993 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" Feb 24 00:07:23 crc kubenswrapper[4824]: W0224 00:07:23.275045 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15b9ae43_8f87_4f2f_a8d9_b55c8fa986ac.slice/crio-eed966a1c7d3d7481120f71dd8191ff06e3b39fa4062245f1d7158b419a8aa0c WatchSource:0}: Error finding container eed966a1c7d3d7481120f71dd8191ff06e3b39fa4062245f1d7158b419a8aa0c: Status 404 returned error can't find the container with id eed966a1c7d3d7481120f71dd8191ff06e3b39fa4062245f1d7158b419a8aa0c Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.283993 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-d64vq" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.287366 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: W0224 00:07:23.289191 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod939ca085_9383_42e6_b7d6_37f101137273.slice/crio-345e2c140a4948daeb252c8f8ef30d7622bfb85fa4f8a60e66b19550c294dfb8 WatchSource:0}: Error finding container 345e2c140a4948daeb252c8f8ef30d7622bfb85fa4f8a60e66b19550c294dfb8: Status 404 returned error can't find the container with id 345e2c140a4948daeb252c8f8ef30d7622bfb85fa4f8a60e66b19550c294dfb8 Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.302167 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: W0224 00:07:23.309179 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28309e58_76b2_4fe6_a1e5_569b6f0b3a5e.slice/crio-9d09bc0d27b84e1e64488295d6c681e537e04c706cd059872bb297627c8e16fa WatchSource:0}: Error finding container 9d09bc0d27b84e1e64488295d6c681e537e04c706cd059872bb297627c8e16fa: Status 404 returned error can't find the container with id 9d09bc0d27b84e1e64488295d6c681e537e04c706cd059872bb297627c8e16fa Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.316143 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.333296 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.334361 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4xjg6"] Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.337686 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.347154 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.347383 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.347586 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.347717 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.347900 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.347966 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.348103 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.352815 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.352869 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.352883 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.352907 4824 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.352921 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:23Z","lastTransitionTime":"2026-02-24T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.353059 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.370954 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.382421 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.396120 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.410224 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.423867 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.437288 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.450951 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.455139 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.455162 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.455171 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.455185 4824 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.455196 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:23Z","lastTransitionTime":"2026-02-24T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.463365 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.481925 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-log-socket\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.481978 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d985b875-dd5e-4767-a4e2-209894575a8f-ovn-node-metrics-cert\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.482005 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-systemd-units\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.482028 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-run-ovn\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.482053 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6rnj\" (UniqueName: 
\"kubernetes.io/projected/d985b875-dd5e-4767-a4e2-209894575a8f-kube-api-access-x6rnj\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.482074 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-run-netns\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.482095 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-cni-bin\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.482120 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-run-ovn-kubernetes\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.482142 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d985b875-dd5e-4767-a4e2-209894575a8f-ovnkube-script-lib\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.482164 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-var-lib-openvswitch\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.482205 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-kubelet\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.482226 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-slash\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.482249 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-run-systemd\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.482268 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-etc-openvswitch\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.482288 4824 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-cni-netd\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.482309 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.482334 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d985b875-dd5e-4767-a4e2-209894575a8f-ovnkube-config\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.482354 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d985b875-dd5e-4767-a4e2-209894575a8f-env-overrides\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.482384 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-node-log\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: 
I0224 00:07:23.482403 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-run-openvswitch\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.516715 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.544215 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.558171 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.558483 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.558589 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.558689 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.558777 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:23Z","lastTransitionTime":"2026-02-24T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.568724 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.582281 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.582853 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-var-lib-openvswitch\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.582900 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-slash\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.582947 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-run-systemd\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.582964 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-var-lib-openvswitch\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583002 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-kubelet\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583020 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-run-systemd\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583026 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-etc-openvswitch\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583046 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-kubelet\") pod 
\"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583049 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-cni-netd\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583070 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-etc-openvswitch\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583075 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583096 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-cni-netd\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583097 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583069 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-slash\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583098 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d985b875-dd5e-4767-a4e2-209894575a8f-ovnkube-config\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583357 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d985b875-dd5e-4767-a4e2-209894575a8f-env-overrides\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583409 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-node-log\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583478 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-run-openvswitch\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583557 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-node-log\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583561 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-log-socket\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583613 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-log-socket\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583625 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-run-openvswitch\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583613 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d985b875-dd5e-4767-a4e2-209894575a8f-ovn-node-metrics-cert\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583681 4824 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-systemd-units\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583704 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-run-ovn\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583733 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6rnj\" (UniqueName: \"kubernetes.io/projected/d985b875-dd5e-4767-a4e2-209894575a8f-kube-api-access-x6rnj\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583740 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-systemd-units\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583758 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-run-netns\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583787 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-run-netns\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583798 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-cni-bin\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583825 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-cni-bin\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583824 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d985b875-dd5e-4767-a4e2-209894575a8f-ovnkube-config\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583839 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-run-ovn-kubernetes\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583922 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/d985b875-dd5e-4767-a4e2-209894575a8f-ovnkube-script-lib\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.583868 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-run-ovn-kubernetes\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.584018 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-run-ovn\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.585050 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d985b875-dd5e-4767-a4e2-209894575a8f-env-overrides\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.589137 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d985b875-dd5e-4767-a4e2-209894575a8f-ovnkube-script-lib\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.595291 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d985b875-dd5e-4767-a4e2-209894575a8f-ovn-node-metrics-cert\") pod 
\"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.602861 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.605830 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6rnj\" (UniqueName: \"kubernetes.io/projected/d985b875-dd5e-4767-a4e2-209894575a8f-kube-api-access-x6rnj\") pod \"ovnkube-node-4xjg6\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.616441 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.632208 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.650167 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.662017 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.662042 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.662050 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.662066 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.662077 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:23Z","lastTransitionTime":"2026-02-24T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.672787 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.676752 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.689015 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.693208 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:23 crc kubenswrapper[4824]: E0224 00:07:23.693359 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.704367 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 01:40:43.703549142 +0000 UTC Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.764566 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.764612 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.764621 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.764639 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.764651 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:23Z","lastTransitionTime":"2026-02-24T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.867963 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.868023 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.868035 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.868053 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.868067 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:23Z","lastTransitionTime":"2026-02-24T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.974987 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.975031 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.975042 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.975080 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:23 crc kubenswrapper[4824]: I0224 00:07:23.975097 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:23Z","lastTransitionTime":"2026-02-24T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.082598 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.082649 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.082660 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.082677 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.082691 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:24Z","lastTransitionTime":"2026-02-24T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.161082 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-nwxht" event={"ID":"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf","Type":"ContainerStarted","Data":"69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e"} Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.162371 4824 generic.go:334] "Generic (PLEG): container finished" podID="d985b875-dd5e-4767-a4e2-209894575a8f" containerID="1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d" exitCode=0 Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.162431 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerDied","Data":"1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d"} Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.162458 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerStarted","Data":"39c21b24d26f0ce7cc1f64fcb5e9960f6a2487988e095495d5e73beb90c5e099"} Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.164122 4824 generic.go:334] "Generic (PLEG): container finished" podID="28309e58-76b2-4fe6-a1e5-569b6f0b3a5e" containerID="002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0" exitCode=0 Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.164157 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" event={"ID":"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e","Type":"ContainerDied","Data":"002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0"} Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.164215 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" 
event={"ID":"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e","Type":"ContainerStarted","Data":"9d09bc0d27b84e1e64488295d6c681e537e04c706cd059872bb297627c8e16fa"} Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.166387 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" event={"ID":"939ca085-9383-42e6-b7d6-37f101137273","Type":"ContainerStarted","Data":"c2cc64aac8c2354ba053a22c4c5cde7371a99c7aaaece1fe2dd2f7b41eceb4e5"} Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.166448 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" event={"ID":"939ca085-9383-42e6-b7d6-37f101137273","Type":"ContainerStarted","Data":"13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3"} Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.166469 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" event={"ID":"939ca085-9383-42e6-b7d6-37f101137273","Type":"ContainerStarted","Data":"345e2c140a4948daeb252c8f8ef30d7622bfb85fa4f8a60e66b19550c294dfb8"} Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.168486 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wvqfl" event={"ID":"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac","Type":"ContainerStarted","Data":"4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d"} Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.168559 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wvqfl" event={"ID":"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac","Type":"ContainerStarted","Data":"eed966a1c7d3d7481120f71dd8191ff06e3b39fa4062245f1d7158b419a8aa0c"} Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.181260 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.187902 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.187956 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.187966 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.187983 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.187993 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:24Z","lastTransitionTime":"2026-02-24T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.197700 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.216084 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.230263 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.248604 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.262611 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.275441 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.289319 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.290898 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.290932 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.290945 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.290963 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.290976 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:24Z","lastTransitionTime":"2026-02-24T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.305745 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.331814 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.353718 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.375378 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.392202 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.394029 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.394087 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.394100 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.394119 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.394131 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:24Z","lastTransitionTime":"2026-02-24T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.417569 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.440531 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.461482 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.478224 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" 
Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.491049 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.496130 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.496158 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.496167 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.496185 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.496199 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:24Z","lastTransitionTime":"2026-02-24T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.511081 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.532176 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.547193 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.564732 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.578881 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.599019 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.599072 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.599085 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.599106 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.599122 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:24Z","lastTransitionTime":"2026-02-24T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.602031 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.693696 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.693696 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:24 crc kubenswrapper[4824]: E0224 00:07:24.693870 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:24 crc kubenswrapper[4824]: E0224 00:07:24.693923 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.701093 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.701140 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.701150 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.701168 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.701181 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:24Z","lastTransitionTime":"2026-02-24T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.705264 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 14:53:50.522666521 +0000 UTC Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.804002 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.804041 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.804052 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.804072 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.804083 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:24Z","lastTransitionTime":"2026-02-24T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.912239 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.912285 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.912297 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.912316 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:24 crc kubenswrapper[4824]: I0224 00:07:24.912328 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:24Z","lastTransitionTime":"2026-02-24T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.015153 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.015286 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.015317 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.015351 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.015374 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:25Z","lastTransitionTime":"2026-02-24T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.119171 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.119265 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.119289 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.119325 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.119353 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:25Z","lastTransitionTime":"2026-02-24T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.176373 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerStarted","Data":"4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7"} Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.176443 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerStarted","Data":"f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04"} Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.176469 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerStarted","Data":"b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2"} Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.176490 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerStarted","Data":"869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8"} Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.176510 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerStarted","Data":"8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44"} Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.176555 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerStarted","Data":"0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91"} Feb 24 00:07:25 crc kubenswrapper[4824]: 
I0224 00:07:25.182257 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" event={"ID":"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e","Type":"ContainerStarted","Data":"3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80"} Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.203693 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.222773 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.222818 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.222834 4824 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.222851 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.222864 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:25Z","lastTransitionTime":"2026-02-24T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.235739 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.261655 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.290931 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.314183 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.329564 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.338717 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.338789 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.338806 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.338824 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.338834 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:25Z","lastTransitionTime":"2026-02-24T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.350629 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.373374 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.392330 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.411741 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.425886 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.439847 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.442581 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.442619 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:25 crc 
kubenswrapper[4824]: I0224 00:07:25.442628 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.442644 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.442657 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:25Z","lastTransitionTime":"2026-02-24T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.545023 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.545068 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.545078 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.545095 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.545107 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:25Z","lastTransitionTime":"2026-02-24T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.657436 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.657545 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.657574 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.657606 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.657660 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:25Z","lastTransitionTime":"2026-02-24T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.692930 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:25 crc kubenswrapper[4824]: E0224 00:07:25.693132 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.706040 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 21:16:17.964738619 +0000 UTC Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.767767 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.767818 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.767834 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.767859 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.767877 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:25Z","lastTransitionTime":"2026-02-24T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.871471 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.871564 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.871584 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.871610 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.871627 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:25Z","lastTransitionTime":"2026-02-24T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.974681 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.974770 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.974789 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.974817 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:25 crc kubenswrapper[4824]: I0224 00:07:25.974839 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:25Z","lastTransitionTime":"2026-02-24T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.079569 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.079894 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.079905 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.079922 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.079933 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.183684 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.183737 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.183747 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.183766 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.183779 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.187251 4824 generic.go:334] "Generic (PLEG): container finished" podID="28309e58-76b2-4fe6-a1e5-569b6f0b3a5e" containerID="3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80" exitCode=0 Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.187315 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" event={"ID":"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e","Type":"ContainerDied","Data":"3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80"} Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.209004 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.227728 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.242087 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.263675 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.284497 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.287470 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.287504 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.287517 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.287649 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.287663 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.298966 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.312804 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.322382 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26
a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.334573 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.348049 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.362062 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.381907 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.390622 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.390661 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.390671 4824 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.390689 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.390698 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.493723 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.494139 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.494151 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.494169 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.494180 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.597279 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.597316 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.597324 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.597338 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.597350 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.693838 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.693838 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:26 crc kubenswrapper[4824]: E0224 00:07:26.694040 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:26 crc kubenswrapper[4824]: E0224 00:07:26.694181 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.701994 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.702050 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.702067 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.702090 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.702109 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.706261 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 10:28:14.590885794 +0000 UTC Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.713036 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.733194 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.754901 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.785069 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.809360 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.809464 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.809483 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.809510 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.809562 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.809928 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.825176 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.838645 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.855300 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.869647 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.882810 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.897100 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7
371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.910981 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-e
tc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.912238 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.912271 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.912280 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.912299 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4824]: I0224 00:07:26.912309 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.015101 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.015131 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.015139 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.015156 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.015166 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:27Z","lastTransitionTime":"2026-02-24T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.117920 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.117967 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.117985 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.118011 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.118028 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:27Z","lastTransitionTime":"2026-02-24T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.197382 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerStarted","Data":"05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430"} Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.203017 4824 generic.go:334] "Generic (PLEG): container finished" podID="28309e58-76b2-4fe6-a1e5-569b6f0b3a5e" containerID="e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154" exitCode=0 Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.203094 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" event={"ID":"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e","Type":"ContainerDied","Data":"e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154"} Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.216772 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.221548 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.221596 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.221607 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 
00:07:27.221644 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.221657 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:27Z","lastTransitionTime":"2026-02-24T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.230081 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.244938 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.260328 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" 
Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.274013 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.289180 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.310062 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.324624 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.324676 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.324688 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.324708 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.324720 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:27Z","lastTransitionTime":"2026-02-24T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.325555 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.336322 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.351402 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26
a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.370042 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 
00:07:27.385552 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.427573 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.427618 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.427631 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.427650 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.427665 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:27Z","lastTransitionTime":"2026-02-24T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.530461 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.530513 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.530542 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.530563 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.530576 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:27Z","lastTransitionTime":"2026-02-24T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.634629 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.635067 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.635093 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.635127 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.635151 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:27Z","lastTransitionTime":"2026-02-24T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.693353 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:27 crc kubenswrapper[4824]: E0224 00:07:27.693641 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.706686 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 03:04:56.561678482 +0000 UTC Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.738831 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.738887 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.738908 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.738933 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.738952 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:27Z","lastTransitionTime":"2026-02-24T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.845566 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.845609 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.845619 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.845635 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.845644 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:27Z","lastTransitionTime":"2026-02-24T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.948084 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.948122 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.948130 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.948145 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:27 crc kubenswrapper[4824]: I0224 00:07:27.948154 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:27Z","lastTransitionTime":"2026-02-24T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.051172 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.051205 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.051214 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.051229 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.051238 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:28Z","lastTransitionTime":"2026-02-24T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.154101 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.154135 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.154142 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.154157 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.154166 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:28Z","lastTransitionTime":"2026-02-24T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.217902 4824 generic.go:334] "Generic (PLEG): container finished" podID="28309e58-76b2-4fe6-a1e5-569b6f0b3a5e" containerID="6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50" exitCode=0 Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.217957 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" event={"ID":"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e","Type":"ContainerDied","Data":"6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50"} Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.250782 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba
93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.256129 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.256162 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.256171 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.256186 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.256197 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:28Z","lastTransitionTime":"2026-02-24T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.288996 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.320068 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.335241 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.348632 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.363512 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.363563 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.363577 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.363596 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.363609 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:28Z","lastTransitionTime":"2026-02-24T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.366577 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa64
6720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedA
t\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.380828 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.392432 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.407073 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.423376 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.439940 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.459620 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.465505 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.465557 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.465567 4824 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.465583 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.465593 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:28Z","lastTransitionTime":"2026-02-24T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.569797 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.569874 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.569892 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.569938 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.569952 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:28Z","lastTransitionTime":"2026-02-24T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.673063 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.673130 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.673145 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.673170 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.673190 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:28Z","lastTransitionTime":"2026-02-24T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.693498 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.693498 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:28 crc kubenswrapper[4824]: E0224 00:07:28.693711 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:28 crc kubenswrapper[4824]: E0224 00:07:28.693771 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.706881 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 19:05:59.065908415 +0000 UTC Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.776094 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.776159 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.776170 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.776194 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.776211 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:28Z","lastTransitionTime":"2026-02-24T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.879808 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.879867 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.879881 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.879902 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.879915 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:28Z","lastTransitionTime":"2026-02-24T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.983361 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.983458 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.983482 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.983509 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:28 crc kubenswrapper[4824]: I0224 00:07:28.983570 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:28Z","lastTransitionTime":"2026-02-24T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.089892 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.090306 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.090318 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.090338 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.090351 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:29Z","lastTransitionTime":"2026-02-24T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.192663 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.192711 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.192726 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.192746 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.192763 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:29Z","lastTransitionTime":"2026-02-24T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.228173 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerStarted","Data":"9a2a925fe48fd97ac70a501bb5e22a03c571cd645c65d28ba6ce3318d93c26c5"} Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.229174 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.229298 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.229742 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.237203 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" event={"ID":"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e","Type":"ContainerStarted","Data":"43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268"} Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.251388 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.261790 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.264495 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.273668 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.294865 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.295830 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.295873 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.295887 4824 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.295907 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.295920 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:29Z","lastTransitionTime":"2026-02-24T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.316822 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.335245 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.352541 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.374842 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.397952 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:29 crc 
kubenswrapper[4824]: I0224 00:07:29.398009 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.398026 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.398051 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.398068 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:29Z","lastTransitionTime":"2026-02-24T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.402776 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a2a925fe48fd97ac70a501bb5e22a03c571cd645c65d28ba6ce3318d93c26c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a5
2430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.424620 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.438375 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.456675 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7
371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.473504 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.489445 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.502229 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.502299 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.502315 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.502341 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.502357 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:29Z","lastTransitionTime":"2026-02-24T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.505535 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.523172 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.545891 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a2a925fe48fd97ac70a501bb5e22a03c571cd645c65d28ba6ce3318d93c26c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.561163 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.579009 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.596404 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.604680 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.604740 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.604755 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.604781 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.604797 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:29Z","lastTransitionTime":"2026-02-24T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.615254 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa64
6720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedA
t\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.630980 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.647811 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.663256 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7
371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.679188 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-e
tc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.693474 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:29 crc kubenswrapper[4824]: E0224 00:07:29.693696 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.707241 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.707277 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.707289 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.707307 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.707319 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:29Z","lastTransitionTime":"2026-02-24T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.713917 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:34:01.214503728 +0000 UTC Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.782023 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-2zsq6"] Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.782538 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-2zsq6" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.787076 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.787473 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.788004 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.788592 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.803704 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.810008 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.810039 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.810050 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.810068 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.810081 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:29Z","lastTransitionTime":"2026-02-24T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.820147 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.838133 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.869928 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a2a925fe48fd97ac70a501bb5e22a03c571cd645c65d28ba6ce3318d93c26c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.889935 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.917922 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.917965 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.917975 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.917996 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.918006 4824 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:29Z","lastTransitionTime":"2026-02-24T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.921488 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.940014 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.954802 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.957427 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/bffd69c2-56a8-4fa0-9fbf-82a508f80ec1-host\") pod \"node-ca-2zsq6\" (UID: \"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\") " pod="openshift-image-registry/node-ca-2zsq6" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.957502 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bffd69c2-56a8-4fa0-9fbf-82a508f80ec1-serviceca\") pod \"node-ca-2zsq6\" (UID: \"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\") " pod="openshift-image-registry/node-ca-2zsq6" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.957619 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkj25\" (UniqueName: \"kubernetes.io/projected/bffd69c2-56a8-4fa0-9fbf-82a508f80ec1-kube-api-access-dkj25\") pod \"node-ca-2zsq6\" (UID: \"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\") " pod="openshift-image-registry/node-ca-2zsq6" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.968343 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.983844 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4824]: I0224 00:07:29.997154 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.010958 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7
371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.022457 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.022505 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.022530 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.022554 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.022564 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:30Z","lastTransitionTime":"2026-02-24T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.026471 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttl
vz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.058636 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bffd69c2-56a8-4fa0-9fbf-82a508f80ec1-host\") pod \"node-ca-2zsq6\" (UID: \"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\") " pod="openshift-image-registry/node-ca-2zsq6" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.058702 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bffd69c2-56a8-4fa0-9fbf-82a508f80ec1-serviceca\") pod \"node-ca-2zsq6\" (UID: \"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\") " pod="openshift-image-registry/node-ca-2zsq6" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.058726 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkj25\" (UniqueName: \"kubernetes.io/projected/bffd69c2-56a8-4fa0-9fbf-82a508f80ec1-kube-api-access-dkj25\") pod \"node-ca-2zsq6\" (UID: \"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\") " pod="openshift-image-registry/node-ca-2zsq6" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.058796 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bffd69c2-56a8-4fa0-9fbf-82a508f80ec1-host\") pod \"node-ca-2zsq6\" (UID: 
\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\") " pod="openshift-image-registry/node-ca-2zsq6" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.059874 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bffd69c2-56a8-4fa0-9fbf-82a508f80ec1-serviceca\") pod \"node-ca-2zsq6\" (UID: \"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\") " pod="openshift-image-registry/node-ca-2zsq6" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.076883 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkj25\" (UniqueName: \"kubernetes.io/projected/bffd69c2-56a8-4fa0-9fbf-82a508f80ec1-kube-api-access-dkj25\") pod \"node-ca-2zsq6\" (UID: \"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\") " pod="openshift-image-registry/node-ca-2zsq6" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.100604 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-2zsq6" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.124210 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.124289 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.124302 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.124323 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.124335 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:30Z","lastTransitionTime":"2026-02-24T00:07:30Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.228323 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.228387 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.228414 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.228437 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.228453 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:30Z","lastTransitionTime":"2026-02-24T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.243123 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-2zsq6" event={"ID":"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1","Type":"ContainerStarted","Data":"8ac7a616142a433b8c356f5835c547c6346851e18c553b7653b7003e821c9a50"} Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.332653 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.332906 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.332916 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.332933 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.332944 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:30Z","lastTransitionTime":"2026-02-24T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.435576 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.435622 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.435634 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.435653 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.435666 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:30Z","lastTransitionTime":"2026-02-24T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.539887 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.539921 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.539930 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.539946 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.539956 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:30Z","lastTransitionTime":"2026-02-24T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.644368 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.644421 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.644435 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.644453 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.644465 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:30Z","lastTransitionTime":"2026-02-24T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.693801 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.693899 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:30 crc kubenswrapper[4824]: E0224 00:07:30.694003 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:30 crc kubenswrapper[4824]: E0224 00:07:30.694622 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.714109 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 19:04:29.170569902 +0000 UTC Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.755023 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.755094 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.755116 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.755149 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.755173 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:30Z","lastTransitionTime":"2026-02-24T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.859377 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.859450 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.859469 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.859498 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.859650 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:30Z","lastTransitionTime":"2026-02-24T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.962132 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.962183 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.962194 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.962212 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:30 crc kubenswrapper[4824]: I0224 00:07:30.962225 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:30Z","lastTransitionTime":"2026-02-24T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.065840 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.065894 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.065917 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.065937 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.065952 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.168938 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.168975 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.168984 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.168998 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.169008 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.222011 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.222052 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.222059 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.222078 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.222088 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4824]: E0224 00:07:31.240312 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.245709 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.245751 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.245764 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.245784 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.245801 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.247671 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-2zsq6" event={"ID":"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1","Type":"ContainerStarted","Data":"4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd"} Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.257253 4824 generic.go:334] "Generic (PLEG): container finished" podID="28309e58-76b2-4fe6-a1e5-569b6f0b3a5e" containerID="43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268" exitCode=0 Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.257363 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" event={"ID":"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e","Type":"ContainerDied","Data":"43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268"} Feb 24 00:07:31 crc kubenswrapper[4824]: E0224 00:07:31.271486 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has 
no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c6
9fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\
\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737
e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909
bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.278723 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, 
/tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\
\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.282360 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.282419 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.282439 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 
00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.282470 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.282490 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.297453 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: E0224 
00:07:31.301377 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.306312 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.306346 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.306356 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.306374 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.306385 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.312269 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: E0224 00:07:31.323396 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.328255 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.328301 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.328317 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.328341 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.328357 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.334651 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z 
is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: E0224 00:07:31.346712 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: E0224 00:07:31.346879 4824 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.348623 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.348653 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.348664 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.348682 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.348694 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.355562 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a2a925fe48fd97ac70a501bb5e22a03c571cd645c65d28ba6ce3318d93c26c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.371668 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.387653 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.402176 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.418181 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttl
vz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.430468 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520e
d63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.444585 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.451606 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.451647 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.451660 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.451682 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.451697 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.457572 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.471701 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.487605 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.504831 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.527042 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.542197 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" 
Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.555056 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.555102 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.555115 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.555139 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.555152 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.558216 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.572076 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.590917 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a2a925fe48fd97ac70a501bb5e22a03c571cd645c65d28ba6ce3318d93c26c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.603179 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.611658 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.621881 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.635329 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.647707 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.659053 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.659102 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.659117 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.659408 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.659429 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.673563 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.693054 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:31 crc kubenswrapper[4824]: E0224 00:07:31.693201 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.714817 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:42:05.968491104 +0000 UTC Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.762731 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.762772 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.762784 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.762802 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.762816 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.866424 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.866838 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.867076 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.867171 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.867255 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.976994 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.977061 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.977079 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.977108 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4824]: I0224 00:07:31.977126 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.084810 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.084889 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.084907 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.084935 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.084954 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:32Z","lastTransitionTime":"2026-02-24T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.188542 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.188597 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.188609 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.188631 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.188644 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:32Z","lastTransitionTime":"2026-02-24T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.273586 4824 generic.go:334] "Generic (PLEG): container finished" podID="28309e58-76b2-4fe6-a1e5-569b6f0b3a5e" containerID="86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45" exitCode=0 Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.273645 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" event={"ID":"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e","Type":"ContainerDied","Data":"86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45"} Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.292259 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.292587 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.292682 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.292777 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.292866 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:32Z","lastTransitionTime":"2026-02-24T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.300017 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.325272 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.350578 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.371759 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a2a925fe48fd97ac70a501bb5e22a03c571cd645c65d28ba6ce3318d93c26c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.389915 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.395736 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.395765 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.395773 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.395789 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.395801 4824 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:32Z","lastTransitionTime":"2026-02-24T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.403343 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.416609 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26
a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.433667 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.444380 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.454922 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T0
0:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.467948 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.482277 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.496133 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.498497 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.499037 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:32 crc 
kubenswrapper[4824]: I0224 00:07:32.499055 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.499072 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.499084 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:32Z","lastTransitionTime":"2026-02-24T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.616230 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.616340 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.616376 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.616409 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.616442 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:32 crc kubenswrapper[4824]: E0224 00:07:32.616557 4824 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:07:32 crc kubenswrapper[4824]: E0224 00:07:32.616584 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:04.616510799 +0000 UTC m=+148.606135308 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:07:32 crc kubenswrapper[4824]: E0224 00:07:32.616644 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:08:04.616624322 +0000 UTC m=+148.606248821 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:07:32 crc kubenswrapper[4824]: E0224 00:07:32.616850 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:07:32 crc kubenswrapper[4824]: E0224 00:07:32.616876 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:07:32 crc kubenswrapper[4824]: E0224 00:07:32.616888 4824 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 
00:07:32 crc kubenswrapper[4824]: E0224 00:07:32.616939 4824 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:07:32 crc kubenswrapper[4824]: E0224 00:07:32.616942 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 00:08:04.616924499 +0000 UTC m=+148.606549158 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:32 crc kubenswrapper[4824]: E0224 00:07:32.617033 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:08:04.617017772 +0000 UTC m=+148.606642281 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:07:32 crc kubenswrapper[4824]: E0224 00:07:32.617026 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:07:32 crc kubenswrapper[4824]: E0224 00:07:32.617070 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:07:32 crc kubenswrapper[4824]: E0224 00:07:32.617085 4824 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:32 crc kubenswrapper[4824]: E0224 00:07:32.617152 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 00:08:04.617130545 +0000 UTC m=+148.606755014 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.618320 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.618352 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.618364 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.618384 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.618399 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:32Z","lastTransitionTime":"2026-02-24T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.694761 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:32 crc kubenswrapper[4824]: E0224 00:07:32.694962 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.695411 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:32 crc kubenswrapper[4824]: E0224 00:07:32.695556 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.716828 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 23:48:45.793589389 +0000 UTC Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.721771 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.721834 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.721853 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.721878 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.721896 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:32Z","lastTransitionTime":"2026-02-24T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.825110 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.825163 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.825180 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.825204 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.825218 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:32Z","lastTransitionTime":"2026-02-24T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.928464 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.928513 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.928564 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.928584 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:32 crc kubenswrapper[4824]: I0224 00:07:32.928595 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:32Z","lastTransitionTime":"2026-02-24T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.034028 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.034112 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.034142 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.034219 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.034248 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:33Z","lastTransitionTime":"2026-02-24T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.137612 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.137658 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.137670 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.137690 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.137703 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:33Z","lastTransitionTime":"2026-02-24T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.240386 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.240432 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.240443 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.240465 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.240477 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:33Z","lastTransitionTime":"2026-02-24T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.287342 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" event={"ID":"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e","Type":"ContainerStarted","Data":"1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e"} Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.291636 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4xjg6_d985b875-dd5e-4767-a4e2-209894575a8f/ovnkube-controller/0.log" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.293510 4824 generic.go:334] "Generic (PLEG): container finished" podID="d985b875-dd5e-4767-a4e2-209894575a8f" containerID="9a2a925fe48fd97ac70a501bb5e22a03c571cd645c65d28ba6ce3318d93c26c5" exitCode=1 Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.293546 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerDied","Data":"9a2a925fe48fd97ac70a501bb5e22a03c571cd645c65d28ba6ce3318d93c26c5"} Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.294260 4824 scope.go:117] "RemoveContainer" containerID="9a2a925fe48fd97ac70a501bb5e22a03c571cd645c65d28ba6ce3318d93c26c5" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.302738 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.321118 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.336772 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.342833 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.343003 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.343251 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.343390 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.343504 4824 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:33Z","lastTransitionTime":"2026-02-24T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.353320 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.373848 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.400731 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a2a925fe48fd97ac70a501bb5e22a03c571cd645c65d28ba6ce3318d93c26c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.416562 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.428860 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.443103 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.447785 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.447824 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.447836 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.447857 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.447873 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:33Z","lastTransitionTime":"2026-02-24T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.460662 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.478745 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.495950 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.517315 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.546472 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.557134 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.557180 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.557192 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.557213 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.557225 4824 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:33Z","lastTransitionTime":"2026-02-24T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.565020 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.578720 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.592126 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.612647 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a2a925fe48fd97ac70a501bb5e22a03c571cd645c65d28ba6ce3318d93c26c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2a925fe48fd97ac70a501bb5e22a03c571cd645c65d28ba6ce3318d93c26c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"message\\\":\\\"7:33.236354 6496 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 00:07:33.236725 6496 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 00:07:33.236821 6496 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 00:07:33.237004 6496 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 00:07:33.237441 6496 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 00:07:33.237802 6496 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 00:07:33.237929 6496 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395acc
fb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.625268 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.637377 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.649476 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7
371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.660397 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.660447 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.660460 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.660484 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.660496 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:33Z","lastTransitionTime":"2026-02-24T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.668679 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.681106 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.692768 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:33 crc kubenswrapper[4824]: E0224 00:07:33.692945 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.704931 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.717409 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 19:32:09.649442555 +0000 UTC Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.746888 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.763809 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.763855 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.763863 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 
00:07:33.763880 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.763892 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:33Z","lastTransitionTime":"2026-02-24T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.774343 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.865869 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.865909 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.865921 4824 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.865942 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.865953 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:33Z","lastTransitionTime":"2026-02-24T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.969338 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.969391 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.969403 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.969421 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:33 crc kubenswrapper[4824]: I0224 00:07:33.969434 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:33Z","lastTransitionTime":"2026-02-24T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.073586 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.073651 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.073673 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.073702 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.073726 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:34Z","lastTransitionTime":"2026-02-24T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.177694 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.177760 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.177789 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.177817 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.177836 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:34Z","lastTransitionTime":"2026-02-24T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.280430 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.280484 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.280498 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.280534 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.280551 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:34Z","lastTransitionTime":"2026-02-24T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.306751 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4xjg6_d985b875-dd5e-4767-a4e2-209894575a8f/ovnkube-controller/0.log" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.310064 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerStarted","Data":"9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952"} Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.310968 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.329355 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.346320 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.367038 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.383429 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.383469 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.383481 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.383499 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.383512 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:34Z","lastTransitionTime":"2026-02-24T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.383617 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.406365 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2a925fe48fd97ac70a501bb5e22a03c571cd645c65d28ba6ce3318d93c26c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"message\\\":\\\"7:33.236354 6496 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 00:07:33.236725 6496 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 00:07:33.236821 6496 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 00:07:33.237004 6496 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 00:07:33.237441 6496 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 00:07:33.237802 6496 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 00:07:33.237929 6496 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.420426 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.431658 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.444257 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7
371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.460903 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:0
7:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.473747 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.486948 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.487017 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.487037 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.487064 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.487082 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:34Z","lastTransitionTime":"2026-02-24T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.489596 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.505395 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.527588 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.590600 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.590642 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.590651 4824 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.590670 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.590682 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:34Z","lastTransitionTime":"2026-02-24T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.692749 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.692863 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:34 crc kubenswrapper[4824]: E0224 00:07:34.692923 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:34 crc kubenswrapper[4824]: E0224 00:07:34.693115 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.693994 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.694038 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.694051 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.694071 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.694086 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:34Z","lastTransitionTime":"2026-02-24T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.718353 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 04:15:00.955625561 +0000 UTC Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.797066 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.797104 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.797113 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.797130 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.797140 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:34Z","lastTransitionTime":"2026-02-24T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.900048 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.900106 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.900119 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.900144 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:34 crc kubenswrapper[4824]: I0224 00:07:34.900159 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:34Z","lastTransitionTime":"2026-02-24T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.003420 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.003471 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.003483 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.003504 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.003557 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:35Z","lastTransitionTime":"2026-02-24T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.107339 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.107398 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.107415 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.107441 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.107462 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:35Z","lastTransitionTime":"2026-02-24T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.210172 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.210231 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.210243 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.210265 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.210278 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:35Z","lastTransitionTime":"2026-02-24T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.313030 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.313171 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.313191 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.313227 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.313247 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:35Z","lastTransitionTime":"2026-02-24T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.317127 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4xjg6_d985b875-dd5e-4767-a4e2-209894575a8f/ovnkube-controller/1.log" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.317962 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4xjg6_d985b875-dd5e-4767-a4e2-209894575a8f/ovnkube-controller/0.log" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.322081 4824 generic.go:334] "Generic (PLEG): container finished" podID="d985b875-dd5e-4767-a4e2-209894575a8f" containerID="9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952" exitCode=1 Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.322133 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerDied","Data":"9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952"} Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.322188 4824 scope.go:117] "RemoveContainer" containerID="9a2a925fe48fd97ac70a501bb5e22a03c571cd645c65d28ba6ce3318d93c26c5" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.323483 4824 scope.go:117] "RemoveContainer" containerID="9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952" Feb 24 00:07:35 crc kubenswrapper[4824]: E0224 00:07:35.323820 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.339327 4824 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.359195 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.386388 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2a925fe48fd97ac70a501bb5e22a03c571cd645c65d28ba6ce3318d93c26c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"message\\\":\\\"7:33.236354 6496 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 00:07:33.236725 6496 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 00:07:33.236821 6496 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 00:07:33.237004 6496 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 00:07:33.237441 6496 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 00:07:33.237802 6496 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 00:07:33.237929 6496 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:34Z\\\",\\\"message\\\":\\\"00:07:34.252477 6701 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 00:07:34.258107 6701 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 00:07:34.258158 6701 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 00:07:34.258177 6701 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 00:07:34.258183 6701 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 00:07:34.258224 6701 handler.go:190] Sending *v1.Pod event handler 3 for 
removal\\\\nI0224 00:07:34.258229 6701 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 00:07:34.258261 6701 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 00:07:34.258290 6701 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 00:07:34.258298 6701 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 00:07:34.258306 6701 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 00:07:34.258312 6701 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 00:07:34.258320 6701 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 00:07:34.260056 6701 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 00:07:34.260124 6701 factory.go:656] Stopping watch factory\\\\nI0224 00:07:34.260149 6701 ovnkube.go:599] Stopped ovnkube\\\\nI0224 00:07:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\
\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.404203 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"
/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-o
perator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.415756 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.415800 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.415810 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.415829 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.415840 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:35Z","lastTransitionTime":"2026-02-24T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.419479 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.435551 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.452093 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.465347 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.481505 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.495277 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.511593 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.518729 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.518790 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.518807 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.518834 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.518852 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:35Z","lastTransitionTime":"2026-02-24T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.527619 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.546967 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.622511 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.622574 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.622584 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.622603 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.622614 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:35Z","lastTransitionTime":"2026-02-24T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.692934 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:35 crc kubenswrapper[4824]: E0224 00:07:35.693091 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.719641 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:51:28.213542899 +0000 UTC Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.726193 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.726235 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.726249 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.726270 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.726282 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:35Z","lastTransitionTime":"2026-02-24T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.830624 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.830672 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.830684 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.830702 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.830712 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:35Z","lastTransitionTime":"2026-02-24T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.835046 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh"] Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.836003 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.839973 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.840401 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.856752 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.857238 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0525cd89-44e0-47f1-856c-f566eb21596a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dbxhh\" (UID: \"0525cd89-44e0-47f1-856c-f566eb21596a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.857303 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0525cd89-44e0-47f1-856c-f566eb21596a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dbxhh\" (UID: \"0525cd89-44e0-47f1-856c-f566eb21596a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.857368 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/0525cd89-44e0-47f1-856c-f566eb21596a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dbxhh\" (UID: \"0525cd89-44e0-47f1-856c-f566eb21596a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.857435 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5smpb\" (UniqueName: \"kubernetes.io/projected/0525cd89-44e0-47f1-856c-f566eb21596a-kube-api-access-5smpb\") pod \"ovnkube-control-plane-749d76644c-dbxhh\" (UID: \"0525cd89-44e0-47f1-856c-f566eb21596a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.872160 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.885711 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.902083 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, 
/tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\
\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.919928 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.934483 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.934573 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.934591 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.934619 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.934640 4824 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:35Z","lastTransitionTime":"2026-02-24T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.936924 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.954449 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.958534 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/0525cd89-44e0-47f1-856c-f566eb21596a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dbxhh\" (UID: \"0525cd89-44e0-47f1-856c-f566eb21596a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.958588 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0525cd89-44e0-47f1-856c-f566eb21596a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dbxhh\" (UID: \"0525cd89-44e0-47f1-856c-f566eb21596a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.958612 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0525cd89-44e0-47f1-856c-f566eb21596a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dbxhh\" (UID: \"0525cd89-44e0-47f1-856c-f566eb21596a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.958638 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5smpb\" (UniqueName: \"kubernetes.io/projected/0525cd89-44e0-47f1-856c-f566eb21596a-kube-api-access-5smpb\") pod \"ovnkube-control-plane-749d76644c-dbxhh\" (UID: \"0525cd89-44e0-47f1-856c-f566eb21596a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.959413 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0525cd89-44e0-47f1-856c-f566eb21596a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dbxhh\" (UID: \"0525cd89-44e0-47f1-856c-f566eb21596a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" Feb 24 00:07:35 crc kubenswrapper[4824]: 
I0224 00:07:35.959415 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0525cd89-44e0-47f1-856c-f566eb21596a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dbxhh\" (UID: \"0525cd89-44e0-47f1-856c-f566eb21596a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.965005 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0525cd89-44e0-47f1-856c-f566eb21596a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dbxhh\" (UID: \"0525cd89-44e0-47f1-856c-f566eb21596a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.975354 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a2a925fe48fd97ac70a501bb5e22a03c571cd645c65d28ba6ce3318d93c26c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"message\\\":\\\"7:33.236354 6496 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute 
(0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 00:07:33.236725 6496 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 00:07:33.236821 6496 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 00:07:33.237004 6496 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 00:07:33.237441 6496 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 00:07:33.237802 6496 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0224 00:07:33.237929 6496 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:34Z\\\",\\\"message\\\":\\\"00:07:34.252477 6701 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 00:07:34.258107 6701 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 00:07:34.258158 6701 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 00:07:34.258177 6701 
handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 00:07:34.258183 6701 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 00:07:34.258224 6701 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 00:07:34.258229 6701 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 00:07:34.258261 6701 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 00:07:34.258290 6701 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 00:07:34.258298 6701 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 00:07:34.258306 6701 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 00:07:34.258312 6701 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 00:07:34.258320 6701 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 00:07:34.260056 6701 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 00:07:34.260124 6701 factory.go:656] Stopping watch factory\\\\nI0224 00:07:34.260149 6701 ovnkube.go:599] Stopped ovnkube\\\\nI0224 
00:07:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cd
a00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.979361 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5smpb\" (UniqueName: \"kubernetes.io/projected/0525cd89-44e0-47f1-856c-f566eb21596a-kube-api-access-5smpb\") pod \"ovnkube-control-plane-749d76644c-dbxhh\" (UID: \"0525cd89-44e0-47f1-856c-f566eb21596a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" Feb 24 00:07:35 crc kubenswrapper[4824]: I0224 00:07:35.990412 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.001894 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.014499 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7
371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.030871 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:0
7:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.038109 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.038147 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.038157 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.038174 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.038184 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:36Z","lastTransitionTime":"2026-02-24T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.043162 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":
\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.053426 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0525cd89-44e0-47f1-856c-f566eb21596a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbxhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.140483 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.140545 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.140557 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.140574 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.140584 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:36Z","lastTransitionTime":"2026-02-24T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.148820 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" Feb 24 00:07:36 crc kubenswrapper[4824]: W0224 00:07:36.162598 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0525cd89_44e0_47f1_856c_f566eb21596a.slice/crio-315b104f0a21ba0ac891b8dceb73312e4e9d79eab578a71f2940b8799eee58e3 WatchSource:0}: Error finding container 315b104f0a21ba0ac891b8dceb73312e4e9d79eab578a71f2940b8799eee58e3: Status 404 returned error can't find the container with id 315b104f0a21ba0ac891b8dceb73312e4e9d79eab578a71f2940b8799eee58e3 Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.243099 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.243143 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.243153 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.243170 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.243182 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:36Z","lastTransitionTime":"2026-02-24T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.329150 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4xjg6_d985b875-dd5e-4767-a4e2-209894575a8f/ovnkube-controller/1.log" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.333680 4824 scope.go:117] "RemoveContainer" containerID="9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.333738 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" event={"ID":"0525cd89-44e0-47f1-856c-f566eb21596a","Type":"ContainerStarted","Data":"315b104f0a21ba0ac891b8dceb73312e4e9d79eab578a71f2940b8799eee58e3"} Feb 24 00:07:36 crc kubenswrapper[4824]: E0224 00:07:36.333943 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.345652 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.345693 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.345703 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.345719 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.345730 4824 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:36Z","lastTransitionTime":"2026-02-24T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.349442 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.362908 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.376146 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.394132 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:34Z\\\",\\\"message\\\":\\\"00:07:34.252477 6701 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 00:07:34.258107 6701 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 00:07:34.258158 6701 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 00:07:34.258177 6701 handler.go:190] Sending *v1.Node event handler 
2 for removal\\\\nI0224 00:07:34.258183 6701 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 00:07:34.258224 6701 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 00:07:34.258229 6701 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 00:07:34.258261 6701 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 00:07:34.258290 6701 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 00:07:34.258298 6701 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 00:07:34.258306 6701 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 00:07:34.258312 6701 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 00:07:34.258320 6701 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 00:07:34.260056 6701 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 00:07:34.260124 6701 factory.go:656] Stopping watch factory\\\\nI0224 00:07:34.260149 6701 ovnkube.go:599] Stopped ovnkube\\\\nI0224 00:07:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be
3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.408468 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, 
/tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\
\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.418284 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.428740 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.442282 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.448758 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.448807 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.448822 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.448842 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.448855 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:36Z","lastTransitionTime":"2026-02-24T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.451893 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.462894 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0525cd89-44e0-47f1-856c-f566eb21596a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbxhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.474132 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.486284 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.505806 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.520595 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.551816 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.551860 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:36 crc 
kubenswrapper[4824]: I0224 00:07:36.551868 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.551884 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.551894 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:36Z","lastTransitionTime":"2026-02-24T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.581248 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-98z42"] Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.581860 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:36 crc kubenswrapper[4824]: E0224 00:07:36.581925 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.596508 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.608736 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.619577 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.630763 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.650346 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:34Z\\\",\\\"message\\\":\\\"00:07:34.252477 6701 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 00:07:34.258107 6701 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 00:07:34.258158 6701 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 00:07:34.258177 6701 handler.go:190] Sending *v1.Node event handler 
2 for removal\\\\nI0224 00:07:34.258183 6701 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 00:07:34.258224 6701 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 00:07:34.258229 6701 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 00:07:34.258261 6701 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 00:07:34.258290 6701 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 00:07:34.258298 6701 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 00:07:34.258306 6701 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 00:07:34.258312 6701 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 00:07:34.258320 6701 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 00:07:34.260056 6701 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 00:07:34.260124 6701 factory.go:656] Stopping watch factory\\\\nI0224 00:07:34.260149 6701 ovnkube.go:599] Stopped ovnkube\\\\nI0224 00:07:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be
3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: E0224 00:07:36.652384 4824 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.662631 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0525cd89-44e0-47f1-856c-f566eb21596a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbxhh\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.666819 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svh6v\" (UniqueName: \"kubernetes.io/projected/a648113f-3e46-4170-ba30-7155fefbb413-kube-api-access-svh6v\") pod \"network-metrics-daemon-98z42\" (UID: \"a648113f-3e46-4170-ba30-7155fefbb413\") " pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.666927 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs\") pod \"network-metrics-daemon-98z42\" (UID: \"a648113f-3e46-4170-ba30-7155fefbb413\") " pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.673275 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.686709 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.693109 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.693200 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:36 crc kubenswrapper[4824]: E0224 00:07:36.693245 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:36 crc kubenswrapper[4824]: E0224 00:07:36.693449 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.699692 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\
\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.716757 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829
570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:3
0Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.719886 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 05:55:19.707900619 +0000 UTC Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.728133 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.740374 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.755755 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.767405 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs\") pod \"network-metrics-daemon-98z42\" (UID: \"a648113f-3e46-4170-ba30-7155fefbb413\") " pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.767454 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svh6v\" (UniqueName: \"kubernetes.io/projected/a648113f-3e46-4170-ba30-7155fefbb413-kube-api-access-svh6v\") pod \"network-metrics-daemon-98z42\" (UID: \"a648113f-3e46-4170-ba30-7155fefbb413\") " pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:36 crc kubenswrapper[4824]: E0224 00:07:36.767895 4824 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:07:36 crc kubenswrapper[4824]: E0224 
00:07:36.767950 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs podName:a648113f-3e46-4170-ba30-7155fefbb413 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:37.267931282 +0000 UTC m=+121.257555761 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs") pod "network-metrics-daemon-98z42" (UID: "a648113f-3e46-4170-ba30-7155fefbb413") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.771206 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: E0224 00:07:36.771629 4824 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.782365 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98z42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a648113f-3e46-4170-ba30-7155fefbb413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98z42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc 
kubenswrapper[4824]: I0224 00:07:36.794652 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.801236 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svh6v\" (UniqueName: \"kubernetes.io/projected/a648113f-3e46-4170-ba30-7155fefbb413-kube-api-access-svh6v\") pod \"network-metrics-daemon-98z42\" (UID: \"a648113f-3e46-4170-ba30-7155fefbb413\") " pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.804780 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.814926 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26
a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.830096 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd58
18c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.841087 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.853379 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0525cd89-44e0-47f1-856c-f566eb21596a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbxhh\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.868849 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\
\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.881302 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.893282 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.911771 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98z42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a648113f-3e46-4170-ba30-7155fefbb413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98z42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc 
kubenswrapper[4824]: I0224 00:07:36.930018 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.946474 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.959838 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.974310 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4824]: I0224 00:07:36.993084 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:34Z\\\",\\\"message\\\":\\\"00:07:34.252477 6701 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 00:07:34.258107 6701 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 00:07:34.258158 6701 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 00:07:34.258177 6701 handler.go:190] Sending *v1.Node event handler 
2 for removal\\\\nI0224 00:07:34.258183 6701 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 00:07:34.258224 6701 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 00:07:34.258229 6701 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 00:07:34.258261 6701 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 00:07:34.258290 6701 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 00:07:34.258298 6701 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 00:07:34.258306 6701 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 00:07:34.258312 6701 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 00:07:34.258320 6701 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 00:07:34.260056 6701 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 00:07:34.260124 6701 factory.go:656] Stopping watch factory\\\\nI0224 00:07:34.260149 6701 ovnkube.go:599] Stopped ovnkube\\\\nI0224 00:07:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be
3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4824]: I0224 00:07:37.272405 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs\") pod \"network-metrics-daemon-98z42\" (UID: \"a648113f-3e46-4170-ba30-7155fefbb413\") " pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:37 crc kubenswrapper[4824]: E0224 00:07:37.272600 4824 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:07:37 crc kubenswrapper[4824]: E0224 00:07:37.272670 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs podName:a648113f-3e46-4170-ba30-7155fefbb413 nodeName:}" failed. 
No retries permitted until 2026-02-24 00:07:38.272652504 +0000 UTC m=+122.262276973 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs") pod "network-metrics-daemon-98z42" (UID: "a648113f-3e46-4170-ba30-7155fefbb413") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:07:37 crc kubenswrapper[4824]: I0224 00:07:37.339605 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" event={"ID":"0525cd89-44e0-47f1-856c-f566eb21596a","Type":"ContainerStarted","Data":"1f1d96e1625c71100e773b21982fde152e58d48d96841da879d6ab878342b1e7"} Feb 24 00:07:37 crc kubenswrapper[4824]: I0224 00:07:37.339663 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" event={"ID":"0525cd89-44e0-47f1-856c-f566eb21596a","Type":"ContainerStarted","Data":"391a3f09eb431d4cc4af8eba60855591dd9fe90774ac0ecbf8ed112d41146fb6"} Feb 24 00:07:37 crc kubenswrapper[4824]: I0224 00:07:37.358558 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4824]: I0224 00:07:37.373353 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4824]: I0224 00:07:37.390983 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4824]: I0224 00:07:37.401546 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98z42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a648113f-3e46-4170-ba30-7155fefbb413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98z42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc 
kubenswrapper[4824]: I0224 00:07:37.419430 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4824]: I0224 00:07:37.432651 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4824]: I0224 00:07:37.446149 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4824]: I0224 00:07:37.458264 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4824]: I0224 00:07:37.476094 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:34Z\\\",\\\"message\\\":\\\"00:07:34.252477 6701 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 00:07:34.258107 6701 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 00:07:34.258158 6701 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 00:07:34.258177 6701 handler.go:190] Sending *v1.Node event handler 
2 for removal\\\\nI0224 00:07:34.258183 6701 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 00:07:34.258224 6701 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 00:07:34.258229 6701 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 00:07:34.258261 6701 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 00:07:34.258290 6701 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 00:07:34.258298 6701 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 00:07:34.258306 6701 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 00:07:34.258312 6701 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 00:07:34.258320 6701 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 00:07:34.260056 6701 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 00:07:34.260124 6701 factory.go:656] Stopping watch factory\\\\nI0224 00:07:34.260149 6701 ovnkube.go:599] Stopped ovnkube\\\\nI0224 00:07:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be
3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4824]: I0224 00:07:37.488368 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4824]: I0224 00:07:37.503612 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4824]: I0224 00:07:37.517337 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7
371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4824]: I0224 00:07:37.534795 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:0
7:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4824]: I0224 00:07:37.546933 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4824]: I0224 00:07:37.559020 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0525cd89-44e0-47f1-856c-f566eb21596a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://391a3f09eb431d4cc4af8eba60855591dd9fe90774ac0ecbf8ed112d41146fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1d96e1625c71100e773b21982fde152e58d48d96841da879d6ab878342b1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbxhh\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4824]: I0224 00:07:37.693122 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:37 crc kubenswrapper[4824]: E0224 00:07:37.693279 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:37 crc kubenswrapper[4824]: I0224 00:07:37.720590 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 22:07:55.576620853 +0000 UTC Feb 24 00:07:38 crc kubenswrapper[4824]: I0224 00:07:38.282217 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs\") pod \"network-metrics-daemon-98z42\" (UID: \"a648113f-3e46-4170-ba30-7155fefbb413\") " pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:38 crc kubenswrapper[4824]: E0224 00:07:38.282358 4824 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:07:38 crc kubenswrapper[4824]: E0224 00:07:38.282444 4824 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs podName:a648113f-3e46-4170-ba30-7155fefbb413 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:40.282427054 +0000 UTC m=+124.272051513 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs") pod "network-metrics-daemon-98z42" (UID: "a648113f-3e46-4170-ba30-7155fefbb413") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:07:38 crc kubenswrapper[4824]: I0224 00:07:38.692727 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:38 crc kubenswrapper[4824]: I0224 00:07:38.692818 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:38 crc kubenswrapper[4824]: E0224 00:07:38.692871 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:38 crc kubenswrapper[4824]: I0224 00:07:38.692839 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:38 crc kubenswrapper[4824]: E0224 00:07:38.693005 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:38 crc kubenswrapper[4824]: E0224 00:07:38.693182 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:07:38 crc kubenswrapper[4824]: I0224 00:07:38.720863 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 19:28:15.024243713 +0000 UTC Feb 24 00:07:39 crc kubenswrapper[4824]: I0224 00:07:39.692949 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:39 crc kubenswrapper[4824]: E0224 00:07:39.693311 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:39 crc kubenswrapper[4824]: I0224 00:07:39.720955 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 00:18:38.12807262 +0000 UTC Feb 24 00:07:40 crc kubenswrapper[4824]: I0224 00:07:40.302787 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs\") pod \"network-metrics-daemon-98z42\" (UID: \"a648113f-3e46-4170-ba30-7155fefbb413\") " pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:40 crc kubenswrapper[4824]: E0224 00:07:40.302989 4824 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:07:40 crc kubenswrapper[4824]: E0224 00:07:40.303104 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs podName:a648113f-3e46-4170-ba30-7155fefbb413 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:44.303076224 +0000 UTC m=+128.292700703 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs") pod "network-metrics-daemon-98z42" (UID: "a648113f-3e46-4170-ba30-7155fefbb413") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:07:40 crc kubenswrapper[4824]: I0224 00:07:40.362110 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:07:40 crc kubenswrapper[4824]: I0224 00:07:40.378603 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:40 crc kubenswrapper[4824]: I0224 00:07:40.393773 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b40590
0881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:40 crc kubenswrapper[4824]: I0224 00:07:40.409841 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26
a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:40 crc kubenswrapper[4824]: I0224 00:07:40.425008 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd58
18c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:40 crc kubenswrapper[4824]: I0224 00:07:40.435831 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:40 crc kubenswrapper[4824]: I0224 00:07:40.447727 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0525cd89-44e0-47f1-856c-f566eb21596a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://391a3f09eb431d4cc4af8eba60855591dd9fe90774ac0ecbf8ed112d41146fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1d96e1625c71100e773b21982fde152e58d
48d96841da879d6ab878342b1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbxhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:40 crc kubenswrapper[4824]: I0224 00:07:40.463536 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:40 crc kubenswrapper[4824]: I0224 00:07:40.476640 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:40 crc kubenswrapper[4824]: I0224 00:07:40.491985 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:40 crc kubenswrapper[4824]: I0224 00:07:40.504102 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98z42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a648113f-3e46-4170-ba30-7155fefbb413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98z42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:40 crc 
kubenswrapper[4824]: I0224 00:07:40.520198 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e8
95db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:40 crc kubenswrapper[4824]: I0224 00:07:40.533437 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:40Z is after 2025-08-24T17:21:41Z" 
Feb 24 00:07:40 crc kubenswrapper[4824]: I0224 00:07:40.550174 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:40 crc kubenswrapper[4824]: I0224 00:07:40.563354 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:40 crc kubenswrapper[4824]: I0224 00:07:40.584589 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:34Z\\\",\\\"message\\\":\\\"00:07:34.252477 6701 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 00:07:34.258107 6701 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 00:07:34.258158 6701 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 00:07:34.258177 6701 handler.go:190] Sending *v1.Node event handler 
2 for removal\\\\nI0224 00:07:34.258183 6701 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 00:07:34.258224 6701 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 00:07:34.258229 6701 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 00:07:34.258261 6701 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 00:07:34.258290 6701 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 00:07:34.258298 6701 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 00:07:34.258306 6701 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 00:07:34.258312 6701 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 00:07:34.258320 6701 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 00:07:34.260056 6701 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 00:07:34.260124 6701 factory.go:656] Stopping watch factory\\\\nI0224 00:07:34.260149 6701 ovnkube.go:599] Stopped ovnkube\\\\nI0224 00:07:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be
3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:40 crc kubenswrapper[4824]: I0224 00:07:40.693606 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:40 crc kubenswrapper[4824]: I0224 00:07:40.693762 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:40 crc kubenswrapper[4824]: E0224 00:07:40.693847 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:07:40 crc kubenswrapper[4824]: I0224 00:07:40.693880 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:40 crc kubenswrapper[4824]: E0224 00:07:40.694028 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:40 crc kubenswrapper[4824]: E0224 00:07:40.694148 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:40 crc kubenswrapper[4824]: I0224 00:07:40.722002 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 10:48:24.293246484 +0000 UTC Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.475288 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.475355 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.475372 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.475403 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:41 crc 
kubenswrapper[4824]: I0224 00:07:41.475421 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:41Z","lastTransitionTime":"2026-02-24T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:41 crc kubenswrapper[4824]: E0224 00:07:41.492475 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.497415 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.497453 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.497464 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.497483 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.497496 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:41Z","lastTransitionTime":"2026-02-24T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:41 crc kubenswrapper[4824]: E0224 00:07:41.511257 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.515961 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.516012 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.516023 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.516045 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.516059 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:41Z","lastTransitionTime":"2026-02-24T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:41 crc kubenswrapper[4824]: E0224 00:07:41.535261 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.539774 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.539826 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.539837 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.539857 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.539870 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:41Z","lastTransitionTime":"2026-02-24T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:41 crc kubenswrapper[4824]: E0224 00:07:41.556022 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.560197 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.560233 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.560243 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.560263 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.560274 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:41Z","lastTransitionTime":"2026-02-24T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:41 crc kubenswrapper[4824]: E0224 00:07:41.579369 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:41Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:41 crc kubenswrapper[4824]: E0224 00:07:41.579567 4824 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.693256 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:41 crc kubenswrapper[4824]: E0224 00:07:41.693453 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:41 crc kubenswrapper[4824]: I0224 00:07:41.723195 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 19:55:37.720976079 +0000 UTC Feb 24 00:07:41 crc kubenswrapper[4824]: E0224 00:07:41.773743 4824 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 00:07:42 crc kubenswrapper[4824]: I0224 00:07:42.693489 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:42 crc kubenswrapper[4824]: I0224 00:07:42.693610 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:42 crc kubenswrapper[4824]: I0224 00:07:42.693793 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:42 crc kubenswrapper[4824]: E0224 00:07:42.693776 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:42 crc kubenswrapper[4824]: E0224 00:07:42.693970 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:07:42 crc kubenswrapper[4824]: E0224 00:07:42.694150 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:42 crc kubenswrapper[4824]: I0224 00:07:42.723861 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:30:19.499639207 +0000 UTC Feb 24 00:07:43 crc kubenswrapper[4824]: I0224 00:07:43.692960 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:43 crc kubenswrapper[4824]: E0224 00:07:43.693137 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:43 crc kubenswrapper[4824]: I0224 00:07:43.724881 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 04:23:35.46281603 +0000 UTC Feb 24 00:07:44 crc kubenswrapper[4824]: I0224 00:07:44.349525 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs\") pod \"network-metrics-daemon-98z42\" (UID: \"a648113f-3e46-4170-ba30-7155fefbb413\") " pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:44 crc kubenswrapper[4824]: E0224 00:07:44.349707 4824 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:07:44 crc kubenswrapper[4824]: E0224 00:07:44.349809 4824 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs podName:a648113f-3e46-4170-ba30-7155fefbb413 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:52.349782714 +0000 UTC m=+136.339407183 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs") pod "network-metrics-daemon-98z42" (UID: "a648113f-3e46-4170-ba30-7155fefbb413") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:07:44 crc kubenswrapper[4824]: I0224 00:07:44.695684 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:44 crc kubenswrapper[4824]: E0224 00:07:44.695827 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:44 crc kubenswrapper[4824]: I0224 00:07:44.696028 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:44 crc kubenswrapper[4824]: E0224 00:07:44.696093 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:07:44 crc kubenswrapper[4824]: I0224 00:07:44.696372 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:44 crc kubenswrapper[4824]: E0224 00:07:44.696436 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:44 crc kubenswrapper[4824]: I0224 00:07:44.725239 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 14:07:51.424252425 +0000 UTC Feb 24 00:07:45 crc kubenswrapper[4824]: I0224 00:07:45.693092 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:45 crc kubenswrapper[4824]: E0224 00:07:45.693268 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:45 crc kubenswrapper[4824]: I0224 00:07:45.726164 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 17:24:09.116770044 +0000 UTC Feb 24 00:07:46 crc kubenswrapper[4824]: I0224 00:07:46.693439 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:46 crc kubenswrapper[4824]: I0224 00:07:46.693550 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:46 crc kubenswrapper[4824]: E0224 00:07:46.693642 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:07:46 crc kubenswrapper[4824]: I0224 00:07:46.693455 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:46 crc kubenswrapper[4824]: E0224 00:07:46.693725 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:46 crc kubenswrapper[4824]: E0224 00:07:46.693819 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:46 crc kubenswrapper[4824]: I0224 00:07:46.713790 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:46 crc kubenswrapper[4824]: I0224 00:07:46.727033 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 22:35:47.501804223 +0000 UTC Feb 24 00:07:46 crc kubenswrapper[4824]: I0224 00:07:46.743697 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:46 crc kubenswrapper[4824]: I0224 00:07:46.758803 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:46 crc kubenswrapper[4824]: I0224 00:07:46.770461 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98z42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a648113f-3e46-4170-ba30-7155fefbb413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98z42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:46 crc 
kubenswrapper[4824]: E0224 00:07:46.775302 4824 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 00:07:46 crc kubenswrapper[4824]: I0224 00:07:46.787715 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:46 crc kubenswrapper[4824]: I0224 00:07:46.806472 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:46Z is after 2025-08-24T17:21:41Z" 
Feb 24 00:07:46 crc kubenswrapper[4824]: I0224 00:07:46.819620 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:46 crc kubenswrapper[4824]: I0224 00:07:46.831701 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:46 crc kubenswrapper[4824]: I0224 00:07:46.851591 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:34Z\\\",\\\"message\\\":\\\"00:07:34.252477 6701 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 00:07:34.258107 6701 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 00:07:34.258158 6701 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 00:07:34.258177 6701 handler.go:190] Sending *v1.Node event handler 
2 for removal\\\\nI0224 00:07:34.258183 6701 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 00:07:34.258224 6701 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 00:07:34.258229 6701 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 00:07:34.258261 6701 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 00:07:34.258290 6701 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 00:07:34.258298 6701 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 00:07:34.258306 6701 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 00:07:34.258312 6701 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 00:07:34.258320 6701 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 00:07:34.260056 6701 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 00:07:34.260124 6701 factory.go:656] Stopping watch factory\\\\nI0224 00:07:34.260149 6701 ovnkube.go:599] Stopped ovnkube\\\\nI0224 00:07:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be
3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:46 crc kubenswrapper[4824]: I0224 00:07:46.863725 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:46 crc kubenswrapper[4824]: I0224 00:07:46.873237 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:46 crc kubenswrapper[4824]: I0224 00:07:46.884043 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7
371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:46 crc kubenswrapper[4824]: I0224 00:07:46.901185 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:0
7:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:46 crc kubenswrapper[4824]: I0224 00:07:46.912580 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:46 crc kubenswrapper[4824]: I0224 00:07:46.922327 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0525cd89-44e0-47f1-856c-f566eb21596a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://391a3f09eb431d4cc4af8eba60855591dd9fe90774ac0ecbf8ed112d41146fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1d96e1625c71100e773b21982fde152e58d48d96841da879d6ab878342b1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbxhh\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:46Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:47 crc kubenswrapper[4824]: I0224 00:07:47.693401 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:47 crc kubenswrapper[4824]: E0224 00:07:47.693640 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:47 crc kubenswrapper[4824]: I0224 00:07:47.727873 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 00:44:10.66885325 +0000 UTC Feb 24 00:07:48 crc kubenswrapper[4824]: I0224 00:07:48.692947 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:48 crc kubenswrapper[4824]: E0224 00:07:48.693130 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:48 crc kubenswrapper[4824]: I0224 00:07:48.693384 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:48 crc kubenswrapper[4824]: E0224 00:07:48.693463 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:07:48 crc kubenswrapper[4824]: I0224 00:07:48.693628 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:48 crc kubenswrapper[4824]: E0224 00:07:48.693696 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:48 crc kubenswrapper[4824]: I0224 00:07:48.728394 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 07:01:21.213513807 +0000 UTC Feb 24 00:07:49 crc kubenswrapper[4824]: I0224 00:07:49.692973 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:49 crc kubenswrapper[4824]: E0224 00:07:49.693472 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:49 crc kubenswrapper[4824]: I0224 00:07:49.729837 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:58:18.636463878 +0000 UTC Feb 24 00:07:50 crc kubenswrapper[4824]: I0224 00:07:50.693724 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:50 crc kubenswrapper[4824]: I0224 00:07:50.693804 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:50 crc kubenswrapper[4824]: I0224 00:07:50.693887 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:50 crc kubenswrapper[4824]: E0224 00:07:50.693981 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:07:50 crc kubenswrapper[4824]: E0224 00:07:50.694565 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:50 crc kubenswrapper[4824]: E0224 00:07:50.694882 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:50 crc kubenswrapper[4824]: I0224 00:07:50.730024 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 06:42:41.104037648 +0000 UTC Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.693425 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:51 crc kubenswrapper[4824]: E0224 00:07:51.694598 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.695086 4824 scope.go:117] "RemoveContainer" containerID="9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.730194 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 07:11:05.373353708 +0000 UTC Feb 24 00:07:51 crc kubenswrapper[4824]: E0224 00:07:51.777425 4824 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.849737 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.850165 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.850440 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.850696 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.850923 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:51Z","lastTransitionTime":"2026-02-24T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:51 crc kubenswrapper[4824]: E0224 00:07:51.869289 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.874981 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.875028 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.875040 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.875060 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.875075 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:51Z","lastTransitionTime":"2026-02-24T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:51 crc kubenswrapper[4824]: E0224 00:07:51.892775 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.897950 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.898201 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.898298 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.898444 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.898589 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:51Z","lastTransitionTime":"2026-02-24T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:51 crc kubenswrapper[4824]: E0224 00:07:51.922046 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.927997 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.928056 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.928079 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.928104 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.928122 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:51Z","lastTransitionTime":"2026-02-24T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:51 crc kubenswrapper[4824]: E0224 00:07:51.945388 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.951202 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.951255 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.951273 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.951297 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:51 crc kubenswrapper[4824]: I0224 00:07:51.951312 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:51Z","lastTransitionTime":"2026-02-24T00:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:51 crc kubenswrapper[4824]: E0224 00:07:51.965340 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:51Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:51 crc kubenswrapper[4824]: E0224 00:07:51.965811 4824 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:07:52 crc kubenswrapper[4824]: I0224 00:07:52.395480 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4xjg6_d985b875-dd5e-4767-a4e2-209894575a8f/ovnkube-controller/1.log" Feb 24 00:07:52 crc kubenswrapper[4824]: I0224 00:07:52.398794 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerStarted","Data":"76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba"} Feb 24 00:07:52 crc kubenswrapper[4824]: I0224 00:07:52.439516 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs\") pod \"network-metrics-daemon-98z42\" (UID: \"a648113f-3e46-4170-ba30-7155fefbb413\") " pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:52 crc kubenswrapper[4824]: E0224 00:07:52.439735 4824 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:07:52 crc kubenswrapper[4824]: E0224 00:07:52.439866 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs podName:a648113f-3e46-4170-ba30-7155fefbb413 nodeName:}" failed. 
No retries permitted until 2026-02-24 00:08:08.439833321 +0000 UTC m=+152.429457800 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs") pod "network-metrics-daemon-98z42" (UID: "a648113f-3e46-4170-ba30-7155fefbb413") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:07:52 crc kubenswrapper[4824]: I0224 00:07:52.693172 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:52 crc kubenswrapper[4824]: I0224 00:07:52.693237 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:52 crc kubenswrapper[4824]: E0224 00:07:52.693323 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:52 crc kubenswrapper[4824]: I0224 00:07:52.693559 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:52 crc kubenswrapper[4824]: E0224 00:07:52.693557 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:07:52 crc kubenswrapper[4824]: E0224 00:07:52.693605 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:52 crc kubenswrapper[4824]: I0224 00:07:52.705244 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 24 00:07:52 crc kubenswrapper[4824]: I0224 00:07:52.731388 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 16:27:01.63347539 +0000 UTC Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.403920 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4xjg6_d985b875-dd5e-4767-a4e2-209894575a8f/ovnkube-controller/2.log" Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.404426 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4xjg6_d985b875-dd5e-4767-a4e2-209894575a8f/ovnkube-controller/1.log" Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.406913 4824 generic.go:334] "Generic (PLEG): container finished" podID="d985b875-dd5e-4767-a4e2-209894575a8f" containerID="76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba" exitCode=1 Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.407017 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" 
event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerDied","Data":"76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba"} Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.407114 4824 scope.go:117] "RemoveContainer" containerID="9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952" Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.407622 4824 scope.go:117] "RemoveContainer" containerID="76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba" Feb 24 00:07:53 crc kubenswrapper[4824]: E0224 00:07:53.407749 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.427020 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0525cd89-44e0-47f1-856c-f566eb21596a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://391a3f09eb431d4cc4af8eba60855591dd9fe90774ac0ecbf8ed112d41146fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1d96e1625c71100e773b21982fde152e58d
48d96841da879d6ab878342b1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbxhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.442040 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56aba328-bba3-447a-9fc1-9ed04311acf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd599f0dec3d074b172aab55b3b74283b37d4dddc2e6a97b7319e8d7bd15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://357f08b7a73fd26e3a3c8232ff5641ec70da20551310ce9348d3b1462a30735c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52d213e230e92cadfb65d0efd8b4dea058e9f947897ad48c9c9d123ed37263ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.489565 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.507868 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.529780 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7
371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.545469 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:0
7:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.555203 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.570175 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.583511 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.597556 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.608364 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98z42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a648113f-3e46-4170-ba30-7155fefbb413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98z42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:53 crc 
kubenswrapper[4824]: I0224 00:07:53.622843 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e8
95db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.636058 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:53Z is after 2025-08-24T17:21:41Z" 
Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.650298 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.664507 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.677221 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 
00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.684351 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d676f945e46c0757bafb1cce26c0a049ef7be697902fb79cb2fc18454a13952\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:34Z\\\",\\\"message\\\":\\\"00:07:34.252477 6701 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 00:07:34.258107 6701 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 00:07:34.258158 6701 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 00:07:34.258177 6701 handler.go:190] Sending *v1.Node event handler 
2 for removal\\\\nI0224 00:07:34.258183 6701 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 00:07:34.258224 6701 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0224 00:07:34.258229 6701 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0224 00:07:34.258261 6701 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0224 00:07:34.258290 6701 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0224 00:07:34.258298 6701 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 00:07:34.258306 6701 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 00:07:34.258312 6701 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 00:07:34.258320 6701 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0224 00:07:34.260056 6701 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 00:07:34.260124 6701 factory.go:656] Stopping watch factory\\\\nI0224 00:07:34.260149 6701 ovnkube.go:599] Stopped ovnkube\\\\nI0224 00:07:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:53Z\\\",\\\"message\\\":\\\"61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0224 00:07:53.258981 6959 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-98z42] creating logical port openshift-multus_network-metrics-daemon-98z42 for pod on switch crc\\\\nI0224 00:07:53.258997 6959 ovn.go:134] Ensuring zone local for Pod 
openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh in node crc\\\\nI0224 00:07:53.259010 6959 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh after 0 failed attempt(s)\\\\nI0224 00:07:53.259021 6959 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh\\\\nI0224 00:07:53.258757 6959 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-vcbgn in node crc\\\\nI0224 00:07:53.259038 6959 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-vcbgn after 0 failed attempt(s)\\\\nI0224 00:07:53.259043 6959 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-vcbgn\\\\nI0224 00:07:53.259043 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/ne
t.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.693457 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:53 crc kubenswrapper[4824]: E0224 00:07:53.693719 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:53 crc kubenswrapper[4824]: I0224 00:07:53.731975 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:56:06.311382255 +0000 UTC Feb 24 00:07:54 crc kubenswrapper[4824]: I0224 00:07:54.414290 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4xjg6_d985b875-dd5e-4767-a4e2-209894575a8f/ovnkube-controller/2.log" Feb 24 00:07:54 crc kubenswrapper[4824]: I0224 00:07:54.420746 4824 scope.go:117] "RemoveContainer" containerID="76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba" Feb 24 00:07:54 crc kubenswrapper[4824]: E0224 00:07:54.421080 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" Feb 24 00:07:54 crc kubenswrapper[4824]: I0224 00:07:54.443816 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:54Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:54 crc kubenswrapper[4824]: I0224 00:07:54.462259 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98z42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a648113f-3e46-4170-ba30-7155fefbb413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98z42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:54Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:54 crc 
kubenswrapper[4824]: I0224 00:07:54.484936 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:54Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:54 crc kubenswrapper[4824]: I0224 00:07:54.502793 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:54Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:54 crc kubenswrapper[4824]: I0224 00:07:54.523984 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:54Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:54 crc kubenswrapper[4824]: I0224 00:07:54.543986 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:54Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:54 crc kubenswrapper[4824]: I0224 00:07:54.574633 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:53Z\\\",\\\"message\\\":\\\"61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0224 00:07:53.258981 6959 base_network_controller_pods.go:477] 
[default/openshift-multus/network-metrics-daemon-98z42] creating logical port openshift-multus_network-metrics-daemon-98z42 for pod on switch crc\\\\nI0224 00:07:53.258997 6959 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh in node crc\\\\nI0224 00:07:53.259010 6959 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh after 0 failed attempt(s)\\\\nI0224 00:07:53.259021 6959 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh\\\\nI0224 00:07:53.258757 6959 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-vcbgn in node crc\\\\nI0224 00:07:53.259038 6959 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-vcbgn after 0 failed attempt(s)\\\\nI0224 00:07:53.259043 6959 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-vcbgn\\\\nI0224 00:07:53.259043 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be
3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:54Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:54 crc kubenswrapper[4824]: I0224 00:07:54.596542 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\"
,\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:54Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:54 crc kubenswrapper[4824]: I0224 00:07:54.613617 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:54Z is after 2025-08-24T17:21:41Z" 
Feb 24 00:07:54 crc kubenswrapper[4824]: I0224 00:07:54.634934 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:54Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:54 crc kubenswrapper[4824]: I0224 00:07:54.653647 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd58
18c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:54Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:54 crc kubenswrapper[4824]: I0224 00:07:54.666116 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:54Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:54 crc kubenswrapper[4824]: I0224 00:07:54.679231 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0525cd89-44e0-47f1-856c-f566eb21596a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://391a3f09eb431d4cc4af8eba60855591dd9fe90774ac0ecbf8ed112d41146fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1d96e1625c71100e773b21982fde152e58d
48d96841da879d6ab878342b1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbxhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:54Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:54 crc kubenswrapper[4824]: I0224 00:07:54.693865 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:54 crc kubenswrapper[4824]: E0224 00:07:54.694473 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:07:54 crc kubenswrapper[4824]: I0224 00:07:54.694013 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:54 crc kubenswrapper[4824]: E0224 00:07:54.694581 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:54 crc kubenswrapper[4824]: I0224 00:07:54.693877 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:54 crc kubenswrapper[4824]: E0224 00:07:54.694651 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:54 crc kubenswrapper[4824]: I0224 00:07:54.700741 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56aba328-bba3-447a-9fc1-9ed04311acf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd599f0dec3d074b172aab55b3b74283b37d4dddc2e6a97b7319e8d7bd15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\"
:\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://357f08b7a73fd26e3a3c8232ff5641ec70da20551310ce9348d3b1462a30735c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52d213e230e92cadfb65d0efd8b4dea058e9f947897ad48c9c9d123ed37263ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2
597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:54Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:54 crc kubenswrapper[4824]: I0224 00:07:54.720340 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:54Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:54 crc kubenswrapper[4824]: I0224 00:07:54.733631 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 19:53:44.96743377 +0000 UTC Feb 24 00:07:54 crc kubenswrapper[4824]: I0224 00:07:54.734710 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:54Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:55 crc kubenswrapper[4824]: I0224 00:07:55.425073 4824 scope.go:117] "RemoveContainer" containerID="76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba" Feb 24 00:07:55 crc kubenswrapper[4824]: E0224 00:07:55.425776 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" Feb 24 00:07:55 crc kubenswrapper[4824]: I0224 00:07:55.692818 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:55 crc kubenswrapper[4824]: E0224 00:07:55.692980 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:55 crc kubenswrapper[4824]: I0224 00:07:55.734214 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 16:20:53.828930348 +0000 UTC Feb 24 00:07:56 crc kubenswrapper[4824]: I0224 00:07:56.693998 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:56 crc kubenswrapper[4824]: I0224 00:07:56.694134 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:56 crc kubenswrapper[4824]: I0224 00:07:56.695014 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:56 crc kubenswrapper[4824]: E0224 00:07:56.695309 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:56 crc kubenswrapper[4824]: E0224 00:07:56.695452 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:07:56 crc kubenswrapper[4824]: E0224 00:07:56.695675 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:56 crc kubenswrapper[4824]: I0224 00:07:56.713166 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 24 00:07:56 crc kubenswrapper[4824]: I0224 00:07:56.725789 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd58
18c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:56 crc kubenswrapper[4824]: I0224 00:07:56.735343 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 10:10:44.618526587 +0000 UTC Feb 24 00:07:56 crc kubenswrapper[4824]: I0224 00:07:56.743249 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:56 crc kubenswrapper[4824]: I0224 00:07:56.760961 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0525cd89-44e0-47f1-856c-f566eb21596a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://391a3f09eb431d4cc4af8eba60855591dd9fe90774ac0ecbf8ed112d41146fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1d96e1625c71100e773b21982fde152e58d
48d96841da879d6ab878342b1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbxhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:56 crc kubenswrapper[4824]: E0224 00:07:56.778656 4824 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 00:07:56 crc kubenswrapper[4824]: I0224 00:07:56.779869 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56aba328-bba3-447a-9fc1-9ed04311acf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd599f0dec3d074b172aab55b3b74283b37d4dddc2e6a97b7319e8d7bd15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://357f08b7a73fd26e3a3c8232ff5641ec70da20551310ce9348d3b1462a30735c\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52d213e230e92cadfb65d0efd8b4dea058e9f947897ad48c9c9d123ed37263ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720
243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:56 crc kubenswrapper[4824]: I0224 00:07:56.793314 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:56 crc kubenswrapper[4824]: I0224 00:07:56.805457 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:56 crc kubenswrapper[4824]: I0224 00:07:56.817651 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7
371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:56 crc kubenswrapper[4824]: I0224 00:07:56.830384 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-e
tc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:56 crc kubenswrapper[4824]: I0224 00:07:56.846081 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:56 crc kubenswrapper[4824]: I0224 00:07:56.857736 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:56 crc kubenswrapper[4824]: I0224 00:07:56.869409 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98z42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a648113f-3e46-4170-ba30-7155fefbb413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98z42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:56 crc 
kubenswrapper[4824]: I0224 00:07:56.881288 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:56 crc 
kubenswrapper[4824]: I0224 00:07:56.900994 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:53Z\\\",\\\"message\\\":\\\"61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0224 00:07:53.258981 6959 base_network_controller_pods.go:477] 
[default/openshift-multus/network-metrics-daemon-98z42] creating logical port openshift-multus_network-metrics-daemon-98z42 for pod on switch crc\\\\nI0224 00:07:53.258997 6959 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh in node crc\\\\nI0224 00:07:53.259010 6959 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh after 0 failed attempt(s)\\\\nI0224 00:07:53.259021 6959 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh\\\\nI0224 00:07:53.258757 6959 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-vcbgn in node crc\\\\nI0224 00:07:53.259038 6959 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-vcbgn after 0 failed attempt(s)\\\\nI0224 00:07:53.259043 6959 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-vcbgn\\\\nI0224 00:07:53.259043 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be
3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:56 crc kubenswrapper[4824]: I0224 00:07:56.919714 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\"
,\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:56 crc kubenswrapper[4824]: I0224 00:07:56.933593 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:56Z is after 2025-08-24T17:21:41Z" 
Feb 24 00:07:56 crc kubenswrapper[4824]: I0224 00:07:56.945977 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:56Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:57 crc kubenswrapper[4824]: I0224 00:07:57.693705 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:57 crc kubenswrapper[4824]: E0224 00:07:57.693910 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:57 crc kubenswrapper[4824]: I0224 00:07:57.736057 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 07:41:53.129771298 +0000 UTC Feb 24 00:07:58 crc kubenswrapper[4824]: I0224 00:07:58.693294 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:07:58 crc kubenswrapper[4824]: E0224 00:07:58.693438 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:07:58 crc kubenswrapper[4824]: I0224 00:07:58.693298 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:58 crc kubenswrapper[4824]: E0224 00:07:58.693674 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:58 crc kubenswrapper[4824]: I0224 00:07:58.693875 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:58 crc kubenswrapper[4824]: E0224 00:07:58.693942 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:58 crc kubenswrapper[4824]: I0224 00:07:58.736557 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 06:12:15.611325744 +0000 UTC Feb 24 00:07:59 crc kubenswrapper[4824]: I0224 00:07:59.692715 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:59 crc kubenswrapper[4824]: E0224 00:07:59.692898 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:59 crc kubenswrapper[4824]: I0224 00:07:59.737656 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 17:47:37.308748108 +0000 UTC Feb 24 00:08:00 crc kubenswrapper[4824]: I0224 00:08:00.693092 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:00 crc kubenswrapper[4824]: I0224 00:08:00.693203 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:00 crc kubenswrapper[4824]: E0224 00:08:00.693250 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:00 crc kubenswrapper[4824]: E0224 00:08:00.693367 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:00 crc kubenswrapper[4824]: I0224 00:08:00.693092 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:00 crc kubenswrapper[4824]: E0224 00:08:00.693696 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:00 crc kubenswrapper[4824]: I0224 00:08:00.738657 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 15:33:30.245790217 +0000 UTC Feb 24 00:08:01 crc kubenswrapper[4824]: I0224 00:08:01.693281 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:01 crc kubenswrapper[4824]: E0224 00:08:01.693624 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:01 crc kubenswrapper[4824]: I0224 00:08:01.704112 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 24 00:08:01 crc kubenswrapper[4824]: I0224 00:08:01.739400 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 11:23:56.344739211 +0000 UTC Feb 24 00:08:01 crc kubenswrapper[4824]: E0224 00:08:01.780595 4824 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.196448 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.196494 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.196504 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.196540 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.196561 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:08:02Z","lastTransitionTime":"2026-02-24T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:08:02 crc kubenswrapper[4824]: E0224 00:08:02.213535 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.217857 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.217915 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.217933 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.217955 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.217973 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:08:02Z","lastTransitionTime":"2026-02-24T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:08:02 crc kubenswrapper[4824]: E0224 00:08:02.237097 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.241826 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.241878 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.241890 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.241908 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.241920 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:08:02Z","lastTransitionTime":"2026-02-24T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:08:02 crc kubenswrapper[4824]: E0224 00:08:02.257413 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.262562 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.262612 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.262622 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.262640 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.262652 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:08:02Z","lastTransitionTime":"2026-02-24T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:08:02 crc kubenswrapper[4824]: E0224 00:08:02.277110 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.282018 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.282063 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.282073 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.282090 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.282101 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:08:02Z","lastTransitionTime":"2026-02-24T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:08:02 crc kubenswrapper[4824]: E0224 00:08:02.295356 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:02Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:02 crc kubenswrapper[4824]: E0224 00:08:02.295465 4824 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.693831 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:02 crc kubenswrapper[4824]: E0224 00:08:02.694025 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.694658 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:02 crc kubenswrapper[4824]: E0224 00:08:02.694796 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.694870 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:02 crc kubenswrapper[4824]: E0224 00:08:02.695031 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:02 crc kubenswrapper[4824]: I0224 00:08:02.756148 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 16:25:05.65268619 +0000 UTC Feb 24 00:08:03 crc kubenswrapper[4824]: I0224 00:08:03.693390 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:03 crc kubenswrapper[4824]: E0224 00:08:03.693913 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:03 crc kubenswrapper[4824]: I0224 00:08:03.756946 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 08:56:00.693843529 +0000 UTC Feb 24 00:08:04 crc kubenswrapper[4824]: I0224 00:08:04.685205 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:04 crc kubenswrapper[4824]: E0224 00:08:04.685384 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:08.685354711 +0000 UTC m=+212.674979190 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:04 crc kubenswrapper[4824]: I0224 00:08:04.685975 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:04 crc kubenswrapper[4824]: I0224 00:08:04.686140 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:04 crc kubenswrapper[4824]: I0224 00:08:04.686446 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:04 crc kubenswrapper[4824]: I0224 00:08:04.686680 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:04 crc kubenswrapper[4824]: E0224 00:08:04.686282 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:08:04 crc kubenswrapper[4824]: E0224 00:08:04.686939 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:08:04 crc kubenswrapper[4824]: E0224 00:08:04.687120 4824 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:08:04 crc kubenswrapper[4824]: E0224 00:08:04.686383 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:08:04 crc kubenswrapper[4824]: E0224 00:08:04.687321 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:08:04 crc kubenswrapper[4824]: E0224 00:08:04.687356 4824 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:08:04 crc 
kubenswrapper[4824]: E0224 00:08:04.686632 4824 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:08:04 crc kubenswrapper[4824]: E0224 00:08:04.686824 4824 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:08:04 crc kubenswrapper[4824]: E0224 00:08:04.687247 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 00:09:08.687234543 +0000 UTC m=+212.676859012 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:08:04 crc kubenswrapper[4824]: E0224 00:08:04.687568 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:09:08.68749083 +0000 UTC m=+212.677115339 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:08:04 crc kubenswrapper[4824]: E0224 00:08:04.687609 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 00:09:08.687591653 +0000 UTC m=+212.677216242 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:08:04 crc kubenswrapper[4824]: E0224 00:08:04.687644 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:09:08.687626034 +0000 UTC m=+212.677250653 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:08:04 crc kubenswrapper[4824]: I0224 00:08:04.693147 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:04 crc kubenswrapper[4824]: I0224 00:08:04.693203 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:04 crc kubenswrapper[4824]: I0224 00:08:04.693161 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:04 crc kubenswrapper[4824]: E0224 00:08:04.693411 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:04 crc kubenswrapper[4824]: E0224 00:08:04.693493 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:04 crc kubenswrapper[4824]: E0224 00:08:04.693595 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:04 crc kubenswrapper[4824]: I0224 00:08:04.757493 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 17:25:55.777963944 +0000 UTC Feb 24 00:08:05 crc kubenswrapper[4824]: I0224 00:08:05.693075 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:05 crc kubenswrapper[4824]: E0224 00:08:05.693230 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:05 crc kubenswrapper[4824]: I0224 00:08:05.758769 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 19:32:27.767050168 +0000 UTC Feb 24 00:08:06 crc kubenswrapper[4824]: I0224 00:08:06.693037 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:06 crc kubenswrapper[4824]: I0224 00:08:06.693144 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:06 crc kubenswrapper[4824]: E0224 00:08:06.693253 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:06 crc kubenswrapper[4824]: E0224 00:08:06.694175 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:06 crc kubenswrapper[4824]: I0224 00:08:06.694182 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:06 crc kubenswrapper[4824]: E0224 00:08:06.694343 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:06 crc kubenswrapper[4824]: I0224 00:08:06.696443 4824 scope.go:117] "RemoveContainer" containerID="76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba" Feb 24 00:08:06 crc kubenswrapper[4824]: E0224 00:08:06.697613 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" Feb 24 00:08:06 crc kubenswrapper[4824]: I0224 00:08:06.721353 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d
3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:06 crc kubenswrapper[4824]: I0224 00:08:06.745201 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc53949c-a6a4-48c1-a312-cc8d39a3238f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://706813f751a53e8e88d84668245bb4b94bc39d6b611f0fed9f774c226dc8c632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fe1cb1e114c1440630a33e3dd127af3600c520e37148cc01ab2e4ed346941ba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:33Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 00:06:03.976490 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:06:03.977317 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:06:03.977962 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:06:03.978570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:33.531382 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:33.531507 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:33Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:03Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45746da193a3aaa64e64067e2324afb4e4fb1a90a1357461ed97c0bbb5109f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf86745569172505df6632421bd1587317cb06e26e70937563bd7d0341c6086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213ad3b341ace3f4473aa48ecaaa41814fe670417431f6d5cd04be03482e597c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:06 crc kubenswrapper[4824]: I0224 00:08:06.759321 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 20:10:48.062486642 +0000 UTC Feb 24 00:08:06 crc 
kubenswrapper[4824]: I0224 00:08:06.762558 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:06 crc kubenswrapper[4824]: I0224 00:08:06.779714 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:06 crc kubenswrapper[4824]: E0224 00:08:06.781713 4824 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 00:08:06 crc kubenswrapper[4824]: I0224 00:08:06.796155 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98z42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a648113f-3e46-4170-ba30-7155fefbb413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98z42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:06 crc 
kubenswrapper[4824]: I0224 00:08:06.810478 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17795ce3-5917-4f02-a513-de53b7c702cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd198d5e4adeaa8e23f5262bc17476b82b1c76b3bfd06b385590eede8c0baa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://370e38e80ea6eb0a1add2e84d515cef4e8aeb77dc43beba4d0c3fdeb871f4fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370e38e80ea6eb0a1add2e84d515cef4e8aeb77dc43beba4d0c3fdeb871f4fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:06 crc kubenswrapper[4824]: I0224 00:08:06.829772 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\"
,\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:06 crc kubenswrapper[4824]: I0224 00:08:06.844089 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:06Z is after 2025-08-24T17:21:41Z" 
Feb 24 00:08:06 crc kubenswrapper[4824]: I0224 00:08:06.859204 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:06 crc kubenswrapper[4824]: I0224 00:08:06.876695 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:06 crc kubenswrapper[4824]: I0224 00:08:06.900868 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:53Z\\\",\\\"message\\\":\\\"61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0224 00:07:53.258981 6959 base_network_controller_pods.go:477] 
[default/openshift-multus/network-metrics-daemon-98z42] creating logical port openshift-multus_network-metrics-daemon-98z42 for pod on switch crc\\\\nI0224 00:07:53.258997 6959 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh in node crc\\\\nI0224 00:07:53.259010 6959 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh after 0 failed attempt(s)\\\\nI0224 00:07:53.259021 6959 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh\\\\nI0224 00:07:53.258757 6959 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-vcbgn in node crc\\\\nI0224 00:07:53.259038 6959 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-vcbgn after 0 failed attempt(s)\\\\nI0224 00:07:53.259043 6959 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-vcbgn\\\\nI0224 00:07:53.259043 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be
3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:06 crc kubenswrapper[4824]: I0224 00:08:06.919404 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:06 crc kubenswrapper[4824]: I0224 00:08:06.936299 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:06 crc kubenswrapper[4824]: I0224 00:08:06.951072 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7
371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:06 crc kubenswrapper[4824]: I0224 00:08:06.970749 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:0
7:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:06 crc kubenswrapper[4824]: I0224 00:08:06.982384 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:06 crc kubenswrapper[4824]: I0224 00:08:06.995883 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0525cd89-44e0-47f1-856c-f566eb21596a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://391a3f09eb431d4cc4af8eba60855591dd9fe90774ac0ecbf8ed112d41146fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1d96e1625c71100e773b21982fde152e58d48d96841da879d6ab878342b1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbxhh\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:06Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:07 crc kubenswrapper[4824]: I0224 00:08:07.012277 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56aba328-bba3-447a-9fc1-9ed04311acf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd599f0dec3d074b172aab55b3b74283b37d4dddc2e6a97b7319e8d7bd15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://357f08b7a73fd26e3a3c8232ff5641ec70da20551310ce9348d3b1462a30735c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52d213e230e92cadfb65d0efd8b4dea058e9f947897ad48c9c9d123ed37263ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15
80132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:07Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:07 crc kubenswrapper[4824]: I0224 00:08:07.693393 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:07 crc kubenswrapper[4824]: E0224 00:08:07.693612 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:07 crc kubenswrapper[4824]: I0224 00:08:07.759921 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 07:43:33.380567384 +0000 UTC Feb 24 00:08:08 crc kubenswrapper[4824]: I0224 00:08:08.532461 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs\") pod \"network-metrics-daemon-98z42\" (UID: \"a648113f-3e46-4170-ba30-7155fefbb413\") " pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:08 crc kubenswrapper[4824]: E0224 00:08:08.532656 4824 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:08:08 crc kubenswrapper[4824]: E0224 00:08:08.532723 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs podName:a648113f-3e46-4170-ba30-7155fefbb413 nodeName:}" failed. No retries permitted until 2026-02-24 00:08:40.532706575 +0000 UTC m=+184.522331034 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs") pod "network-metrics-daemon-98z42" (UID: "a648113f-3e46-4170-ba30-7155fefbb413") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:08:08 crc kubenswrapper[4824]: I0224 00:08:08.693292 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:08 crc kubenswrapper[4824]: I0224 00:08:08.693367 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:08 crc kubenswrapper[4824]: I0224 00:08:08.693390 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:08 crc kubenswrapper[4824]: E0224 00:08:08.693460 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:08 crc kubenswrapper[4824]: E0224 00:08:08.693645 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:08 crc kubenswrapper[4824]: E0224 00:08:08.693791 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:08 crc kubenswrapper[4824]: I0224 00:08:08.760840 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 09:02:03.311170118 +0000 UTC Feb 24 00:08:09 crc kubenswrapper[4824]: I0224 00:08:09.693092 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:09 crc kubenswrapper[4824]: E0224 00:08:09.693293 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:09 crc kubenswrapper[4824]: I0224 00:08:09.761780 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 04:58:50.995611414 +0000 UTC Feb 24 00:08:10 crc kubenswrapper[4824]: I0224 00:08:10.693909 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:10 crc kubenswrapper[4824]: I0224 00:08:10.694003 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:10 crc kubenswrapper[4824]: I0224 00:08:10.693910 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:10 crc kubenswrapper[4824]: E0224 00:08:10.694923 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:10 crc kubenswrapper[4824]: E0224 00:08:10.695050 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:10 crc kubenswrapper[4824]: E0224 00:08:10.695102 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:10 crc kubenswrapper[4824]: I0224 00:08:10.711012 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 24 00:08:10 crc kubenswrapper[4824]: I0224 00:08:10.762742 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 23:13:43.047161668 +0000 UTC Feb 24 00:08:11 crc kubenswrapper[4824]: I0224 00:08:11.692763 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:11 crc kubenswrapper[4824]: E0224 00:08:11.693306 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:11 crc kubenswrapper[4824]: I0224 00:08:11.763247 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:51:59.134108015 +0000 UTC Feb 24 00:08:11 crc kubenswrapper[4824]: E0224 00:08:11.783374 4824 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.343132 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.343542 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.343633 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.343729 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.343812 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:08:12Z","lastTransitionTime":"2026-02-24T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:08:12 crc kubenswrapper[4824]: E0224 00:08:12.361082 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.365511 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.365600 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.365609 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.365627 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.365657 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:08:12Z","lastTransitionTime":"2026-02-24T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:08:12 crc kubenswrapper[4824]: E0224 00:08:12.383943 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.388672 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.388719 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.388731 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.388775 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.388790 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:08:12Z","lastTransitionTime":"2026-02-24T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:08:12 crc kubenswrapper[4824]: E0224 00:08:12.405194 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.409936 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.409963 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.409972 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.409988 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.409997 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:08:12Z","lastTransitionTime":"2026-02-24T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:08:12 crc kubenswrapper[4824]: E0224 00:08:12.425259 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.431077 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.431129 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.431141 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.431163 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.431175 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:08:12Z","lastTransitionTime":"2026-02-24T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:08:12 crc kubenswrapper[4824]: E0224 00:08:12.452638 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: E0224 00:08:12.452839 4824 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.486074 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wvqfl_15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac/kube-multus/0.log" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.486165 4824 generic.go:334] "Generic (PLEG): container finished" podID="15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac" containerID="4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d" exitCode=1 Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.486221 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wvqfl" event={"ID":"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac","Type":"ContainerDied","Data":"4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d"} Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.487016 4824 scope.go:117] "RemoveContainer" containerID="4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.505680 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26
a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.527944 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd58
18c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.541661 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.554741 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0525cd89-44e0-47f1-856c-f566eb21596a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://391a3f09eb431d4cc4af8eba60855591dd9fe90774ac0ecbf8ed112d41146fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1d96e1625c71100e773b21982fde152e58d
48d96841da879d6ab878342b1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbxhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.568092 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56aba328-bba3-447a-9fc1-9ed04311acf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd599f0dec3d074b172aab55b3b74283b37d4dddc2e6a97b7319e8d7bd15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://357f08b7a73fd26e3a3c8232ff5641ec70da20551310ce9348d3b1462a30735c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52d213e230e92cadfb65d0efd8b4dea058e9f947897ad48c9c9d123ed37263ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.584030 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.598452 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.616047 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc53949c-a6a4-48c1-a312-cc8d39a3238f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://706813f751a53e8e88d84668245bb4b94bc39d6b611f0fed9f774c226dc8c632\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fe1cb1e114c1440630a33e3dd127af3600c520e37148cc01ab2e4ed346941ba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:33Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:06:03.976490 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0224 00:06:03.977317 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:06:03.977962 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:06:03.978570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:33.531382 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:33.531507 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:33Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:03Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45746da193a3aaa64e64067e2324afb4e4fb1a90a1357461ed97c0bbb5109f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf86745569172505df6632421bd1587317cb06e26e70937563bd7d0341c6086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213ad3b341ace3f4473aa48ecaaa41814fe670417431f6d5cd04be03482e597c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.631170 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.644944 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98z42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a648113f-3e46-4170-ba30-7155fefbb413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98z42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc 
kubenswrapper[4824]: I0224 00:08:12.658441 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17795ce3-5917-4f02-a513-de53b7c702cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd198d5e4adeaa8e23f5262bc17476b82b1c76b3bfd06b385590eede8c0baa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://370e38e80ea6eb0a1add2e84d515cef4e8aeb77dc43beba4d0c3fdeb871f4fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370e38e80ea6eb0a1add2e84d515cef4e8aeb77dc43beba4d0c3fdeb871f4fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.675535 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.689319 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.692998 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:12 crc kubenswrapper[4824]: E0224 00:08:12.693112 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.693281 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:12 crc kubenswrapper[4824]: E0224 00:08:12.693339 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.693511 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:12 crc kubenswrapper[4824]: E0224 00:08:12.693740 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.703302 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.718682 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:08:11Z\\\",\\\"message\\\":\\\"2026-02-24T00:07:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_61fbc54a-734f-43d4-9941-0e38fbccf016\\\\n2026-02-24T00:07:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_61fbc54a-734f-43d4-9941-0e38fbccf016 to /host/opt/cni/bin/\\\\n2026-02-24T00:07:26Z [verbose] multus-daemon started\\\\n2026-02-24T00:07:26Z [verbose] Readiness Indicator file check\\\\n2026-02-24T00:08:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.742471 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:53Z\\\",\\\"message\\\":\\\"61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0224 00:07:53.258981 6959 base_network_controller_pods.go:477] 
[default/openshift-multus/network-metrics-daemon-98z42] creating logical port openshift-multus_network-metrics-daemon-98z42 for pod on switch crc\\\\nI0224 00:07:53.258997 6959 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh in node crc\\\\nI0224 00:07:53.259010 6959 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh after 0 failed attempt(s)\\\\nI0224 00:07:53.259021 6959 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh\\\\nI0224 00:07:53.258757 6959 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-vcbgn in node crc\\\\nI0224 00:07:53.259038 6959 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-vcbgn after 0 failed attempt(s)\\\\nI0224 00:07:53.259043 6959 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-vcbgn\\\\nI0224 00:07:53.259043 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be
3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.761869 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23aab44a-c0da-4344-b8b1-7c754b00d6d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bbd80285850cb217ec0994ab8841efb660fd2488077ee4968b0c1f5d156fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7244717b123b325ac83f68b976c1e5761a76b1aacac6aab471d0b644386251d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ce8e051c69710768e7b4fba06cc91026ce4d25baf35cd8f9235bfd348a4451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://837b5de339793bc363199eae6f141038bf069b2832f6375a8e01d49bef7ea63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ede0889b30698004a8f288b9cc1bb7d00b194c21ff1936b2743f1e7f246e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9c6e044265812e8e2116a81efcc38023cc8da2b92093e3f75c6efa4e359c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9c6e044265812e8e2116a81efcc38023cc8da2b92093e3f75c6efa4e359c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d404a30b7d0515ae872dab9ec3e60629de532d6f27ab0c9f1e7e3918bad30e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d404a30b7d0515ae872dab9ec3e60629de532d6f27ab0c9f1e7e3918bad30e60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://513b48315443b87dc828225545ae02e1f47580596fc31a7935dd59b80065d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://513b48315443b87dc828225545ae02e1f47580596fc31a7935dd59b80065d26c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.763984 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 04:04:26.435515296 +0000 UTC Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.780865 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\"
,\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:12 crc kubenswrapper[4824]: I0224 00:08:12.798338 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:12Z is after 2025-08-24T17:21:41Z" 
Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.493660 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wvqfl_15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac/kube-multus/0.log" Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.493743 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wvqfl" event={"ID":"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac","Type":"ContainerStarted","Data":"a39d567bc7ae642dd15ce2ea8650e1aa0fd3688e3c3c061ad145edf2bf963c06"} Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.515321 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0525cd89-44e0-47f1-856c-f566eb21596a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://391a3f09eb431d4cc4af8eba60855591dd9fe90774ac0ecbf8ed112d41146fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1d96e1625c71100e773b21982fde152e58d48d96841da879d6ab878342b1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbxhh\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.532106 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56aba328-bba3-447a-9fc1-9ed04311acf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd599f0dec3d074b172aab55b3b74283b37d4dddc2e6a97b7319e8d7bd15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://357f08b7a73fd26e3a3c8232ff5641ec70da20551310ce9348d3b1462a30735c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52d213e230e92cadfb65d0efd8b4dea058e9f947897ad48c9c9d123ed37263ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1580132fb7d7b92030
17c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.549314 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.565739 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.582292 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7
371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.600256 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:0
7:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.613256 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.627697 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc53949c-a6a4-48c1-a312-cc8d39a3238f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://706813f751a53e8e88d84668245bb4b94bc39d6b611f0fed9f774c226dc8c632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fe1cb1e114c1440630a33e3dd127af3600c520e37148cc01ab2e4ed346941ba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:33Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:06:03.976490 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:06:03.977317 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:06:03.977962 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:06:03.978570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:33.531382 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:33.531507 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-24T00:06:33Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:03Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45746da193a3aaa64e64067e2324afb4e4fb1a90a1357461ed97c0bbb5109f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf86745569172505df6632421bd1587317cb06e26e70937563bd7d0341c6086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213ad3b341ace3f4473aa48ecaaa41814fe670417431f6d5cd04be03482e597c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.641283 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.658568 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17795ce3-5917-4f02-a513-de53b7c702cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd198d5e4adeaa8e23f5262bc17476b82b1c76b3bfd06b385590eede8c0baa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370e38e80ea6eb0a1add2e84d515cef4e8aeb77dc43beba4d0c3fdeb871f4fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370e38e80ea6eb0a1add2e84d515cef4e8aeb77dc43beba4d0c3fdeb871f4fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.678212 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.693984 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:13 crc kubenswrapper[4824]: E0224 00:08:13.694156 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.700703 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.717052 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98z42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a648113f-3e46-4170-ba30-7155fefbb413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98z42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:13 crc 
kubenswrapper[4824]: I0224 00:08:13.744083 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23aab44a-c0da-4344-b8b1-7c754b00d6d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bbd80285850cb217ec0994ab8841efb660fd2488077ee4968b0c1f5d156fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://7244717b123b325ac83f68b976c1e5761a76b1aacac6aab471d0b644386251d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ce8e051c69710768e7b4fba06cc91026ce4d25baf35cd8f9235bfd348a4451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://837b5de339793bc363199eae6f141038bf069b2832f6375a8e01d49bef7ea63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ede0889b30698004a8f288b9cc1bb7d00b194c21ff1936b2743f1e7f246e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9c6e044265812e8e2116a81efcc38023cc8da2b92093e3f75c6efa4e359c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9c6e044265812e8e2116a81efcc38023cc8da2b92093e3f75c6efa4e359c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d404a30b7d0515ae872dab9ec3e60629de532d6f27ab0c9f1e7e3918bad30e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d404a30b7d0515ae872dab9ec3e60629de532d6f27ab0c9f1e7e3918bad30e60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://513b48315443b87dc828225545ae02e1f47580596fc31a7935dd59b80065d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://513b48315443b87dc828225545ae02e1f47580596fc31a7935dd59b80065d26c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.765177 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 05:55:08.561536232 +0000 UTC Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.765435 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\"
,\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.783463 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:13Z is after 2025-08-24T17:21:41Z" 
Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.802833 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.822444 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39d567bc7ae642dd15ce2ea8650e1aa0fd3688e3c3c061ad145edf2bf963c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:08:11Z\\\",\\\"message\\\":\\\"2026-02-24T00:07:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_61fbc54a-734f-43d4-9941-0e38fbccf016\\\\n2026-02-24T00:07:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_61fbc54a-734f-43d4-9941-0e38fbccf016 to /host/opt/cni/bin/\\\\n2026-02-24T00:07:26Z [verbose] multus-daemon started\\\\n2026-02-24T00:07:26Z [verbose] 
Readiness Indicator file check\\\\n2026-02-24T00:08:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:13 crc kubenswrapper[4824]: I0224 00:08:13.845125 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:53Z\\\",\\\"message\\\":\\\"61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where 
column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0224 00:07:53.258981 6959 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-98z42] creating logical port openshift-multus_network-metrics-daemon-98z42 for pod on switch crc\\\\nI0224 00:07:53.258997 6959 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh in node crc\\\\nI0224 00:07:53.259010 6959 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh after 0 failed attempt(s)\\\\nI0224 00:07:53.259021 6959 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh\\\\nI0224 00:07:53.258757 6959 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-vcbgn in node crc\\\\nI0224 00:07:53.259038 6959 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-vcbgn after 0 failed attempt(s)\\\\nI0224 00:07:53.259043 6959 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-vcbgn\\\\nI0224 00:07:53.259043 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be
3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:14 crc kubenswrapper[4824]: I0224 00:08:14.693851 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:14 crc kubenswrapper[4824]: E0224 00:08:14.694051 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:14 crc kubenswrapper[4824]: I0224 00:08:14.693877 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:14 crc kubenswrapper[4824]: I0224 00:08:14.694187 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:14 crc kubenswrapper[4824]: E0224 00:08:14.694351 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:14 crc kubenswrapper[4824]: E0224 00:08:14.694433 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:14 crc kubenswrapper[4824]: I0224 00:08:14.765657 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 04:26:57.43370022 +0000 UTC Feb 24 00:08:15 crc kubenswrapper[4824]: I0224 00:08:15.693747 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:15 crc kubenswrapper[4824]: E0224 00:08:15.693955 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:15 crc kubenswrapper[4824]: I0224 00:08:15.766158 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 05:04:49.336824986 +0000 UTC Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.692776 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.692808 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.692835 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:16 crc kubenswrapper[4824]: E0224 00:08:16.693648 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:16 crc kubenswrapper[4824]: E0224 00:08:16.693761 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:16 crc kubenswrapper[4824]: E0224 00:08:16.693854 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.708097 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98z42" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a648113f-3e46-4170-ba30-7155fefbb413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98z42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:16 crc 
kubenswrapper[4824]: I0224 00:08:16.720510 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17795ce3-5917-4f02-a513-de53b7c702cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd198d5e4adeaa8e23f5262bc17476b82b1c76b3bfd06b385590eede8c0baa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://370e38e80ea6eb0a1add2e84d515cef4e8aeb77dc43beba4d0c3fdeb871f4fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370e38e80ea6eb0a1add2e84d515cef4e8aeb77dc43beba4d0c3fdeb871f4fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.738180 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.753040 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.766809 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 10:50:56.061301222 +0000 UTC Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.768746 4824 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:16 crc kubenswrapper[4824]: E0224 00:08:16.784204 4824 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.789251 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39d567bc7ae642dd15ce2ea8650e1aa0fd3688e3c3c061ad145edf2bf963c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:08:11Z\\\",\\\"message\\\":\\\"2026-02-24T00:07:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_61fbc54a-734f-43d4-9941-0e38fbccf016\\\\n2026-02-24T00:07:26+00:00 [cnibincopy] Successfully moved files 
in /host/opt/cni/bin/upgrade_61fbc54a-734f-43d4-9941-0e38fbccf016 to /host/opt/cni/bin/\\\\n2026-02-24T00:07:26Z [verbose] multus-daemon started\\\\n2026-02-24T00:07:26Z [verbose] Readiness Indicator file check\\\\n2026-02-24T00:08:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\
":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.814516 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:53Z\\\",\\\"message\\\":\\\"61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where 
column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0224 00:07:53.258981 6959 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-98z42] creating logical port openshift-multus_network-metrics-daemon-98z42 for pod on switch crc\\\\nI0224 00:07:53.258997 6959 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh in node crc\\\\nI0224 00:07:53.259010 6959 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh after 0 failed attempt(s)\\\\nI0224 00:07:53.259021 6959 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh\\\\nI0224 00:07:53.258757 6959 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-vcbgn in node crc\\\\nI0224 00:07:53.259038 6959 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-vcbgn after 0 failed attempt(s)\\\\nI0224 00:07:53.259043 6959 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-vcbgn\\\\nI0224 00:07:53.259043 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be
3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.835557 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23aab44a-c0da-4344-b8b1-7c754b00d6d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bbd80285850cb217ec0994ab8841efb660fd2488077ee4968b0c1f5d156fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7244717b123b325ac83f68b976c1e5761a76b1aacac6aab471d0b644386251d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ce8e051c69710768e7b4fba06cc91026ce4d25baf35cd8f9235bfd348a4451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://837b5de339793bc363199eae6f141038bf069b2832f6375a8e01d49bef7ea63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ede0889b30698004a8f288b9cc1bb7d00b194c21ff1936b2743f1e7f246e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9c6e044265812e8e2116a81efcc38023cc8da2b92093e3f75c6efa4e359c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9c6e044265812e8e2116a81efcc38023cc8da2b92093e3f75c6efa4e359c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d404a30b7d0515ae872dab9ec3e60629de532d6f27ab0c9f1e7e3918bad30e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d404a30b7d0515ae872dab9ec3e60629de532d6f27ab0c9f1e7e3918bad30e60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://513b48315443b87dc828225545ae02e1f47580596fc31a7935dd59b80065d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://513b48315443b87dc828225545ae02e1f47580596fc31a7935dd59b80065d26c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.850737 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.863552 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:16Z is after 2025-08-24T17:21:41Z" 
Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.876103 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.890591 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd58
18c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.903858 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.916783 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0525cd89-44e0-47f1-856c-f566eb21596a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://391a3f09eb431d4cc4af8eba60855591dd9fe90774ac0ecbf8ed112d41146fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1d96e1625c71100e773b21982fde152e58d
48d96841da879d6ab878342b1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbxhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.932049 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56aba328-bba3-447a-9fc1-9ed04311acf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd599f0dec3d074b172aab55b3b74283b37d4dddc2e6a97b7319e8d7bd15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://357f08b7a73fd26e3a3c8232ff5641ec70da20551310ce9348d3b1462a30735c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52d213e230e92cadfb65d0efd8b4dea058e9f947897ad48c9c9d123ed37263ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.945901 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:08:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.958362 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.974165 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc53949c-a6a4-48c1-a312-cc8d39a3238f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://706813f751a53e8e88d84668245bb4b94bc39d6b611f0fed9f774c226dc8c632\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fe1cb1e114c1440630a33e3dd127af3600c520e37148cc01ab2e4ed346941ba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:33Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:06:03.976490 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0224 00:06:03.977317 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:06:03.977962 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:06:03.978570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:33.531382 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:33.531507 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:33Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:03Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45746da193a3aaa64e64067e2324afb4e4fb1a90a1357461ed97c0bbb5109f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf86745569172505df6632421bd1587317cb06e26e70937563bd7d0341c6086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213ad3b341ace3f4473aa48ecaaa41814fe670417431f6d5cd04be03482e597c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:16 crc kubenswrapper[4824]: I0224 00:08:16.990701 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:17 crc kubenswrapper[4824]: I0224 00:08:17.693617 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:17 crc kubenswrapper[4824]: E0224 00:08:17.693835 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:17 crc kubenswrapper[4824]: I0224 00:08:17.767067 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 11:32:04.689457131 +0000 UTC Feb 24 00:08:18 crc kubenswrapper[4824]: I0224 00:08:18.693760 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:18 crc kubenswrapper[4824]: I0224 00:08:18.693906 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:18 crc kubenswrapper[4824]: I0224 00:08:18.694387 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:18 crc kubenswrapper[4824]: E0224 00:08:18.694581 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:18 crc kubenswrapper[4824]: E0224 00:08:18.694648 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:18 crc kubenswrapper[4824]: E0224 00:08:18.694691 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:18 crc kubenswrapper[4824]: I0224 00:08:18.695165 4824 scope.go:117] "RemoveContainer" containerID="76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba" Feb 24 00:08:18 crc kubenswrapper[4824]: I0224 00:08:18.767946 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 23:16:59.566175688 +0000 UTC Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.515954 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4xjg6_d985b875-dd5e-4767-a4e2-209894575a8f/ovnkube-controller/2.log" Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.518084 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" 
event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerStarted","Data":"5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac"} Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.519202 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.534228 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc53949c-a6a4-48c1-a312-cc8d39a3238f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://706813f751a53e8e88d84668245bb4b94bc39d6b611f0fed9f774c226dc8c632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fe1cb1e114c1440630a33e3dd127af3600c520e37148cc01ab2e4ed346941ba\\\",\\\"exitCode\\\":255,\\\"finished
At\\\":\\\"2026-02-24T00:06:33Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:06:03.976490 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:06:03.977317 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:06:03.977962 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:06:03.978570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:33.531382 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:33.531507 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:33Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:03Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45746da193a3aaa64e64067e2324afb4e4fb1a90a1357461ed97c0bbb5109f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf86745569172505df6632421bd1587317cb06e26e70937563bd7d0341c6086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213ad3b341ace3f4473aa48ecaaa41814fe670417431f6d5cd04be03482e597c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.549614 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.564681 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.576175 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98z42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a648113f-3e46-4170-ba30-7155fefbb413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98z42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:19 crc 
kubenswrapper[4824]: I0224 00:08:19.587726 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17795ce3-5917-4f02-a513-de53b7c702cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd198d5e4adeaa8e23f5262bc17476b82b1c76b3bfd06b385590eede8c0baa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://370e38e80ea6eb0a1add2e84d515cef4e8aeb77dc43beba4d0c3fdeb871f4fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370e38e80ea6eb0a1add2e84d515cef4e8aeb77dc43beba4d0c3fdeb871f4fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.601571 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.614618 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.638799 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.653921 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39d567bc7ae642dd15ce2ea8650e1aa0fd3688e3c3c061ad145edf2bf963c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:08:11Z\\\",\\\"message\\\":\\\"2026-02-24T00:07:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_61fbc54a-734f-43d4-9941-0e38fbccf016\\\\n2026-02-24T00:07:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_61fbc54a-734f-43d4-9941-0e38fbccf016 to /host/opt/cni/bin/\\\\n2026-02-24T00:07:26Z [verbose] multus-daemon started\\\\n2026-02-24T00:07:26Z [verbose] 
Readiness Indicator file check\\\\n2026-02-24T00:08:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.672585 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:53Z\\\",\\\"message\\\":\\\"61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where 
column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0224 00:07:53.258981 6959 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-98z42] creating logical port openshift-multus_network-metrics-daemon-98z42 for pod on switch crc\\\\nI0224 00:07:53.258997 6959 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh in node crc\\\\nI0224 00:07:53.259010 6959 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh after 0 failed attempt(s)\\\\nI0224 00:07:53.259021 6959 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh\\\\nI0224 00:07:53.258757 6959 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-vcbgn in node crc\\\\nI0224 00:07:53.259038 6959 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-vcbgn after 0 failed attempt(s)\\\\nI0224 00:07:53.259043 6959 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-vcbgn\\\\nI0224 00:07:53.259043 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.693014 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:19 crc kubenswrapper[4824]: E0224 00:08:19.693155 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.693125 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23aab44a-c0da-4344-b8b1-7c754b00d6d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bbd80285850cb217ec0994ab8841efb660fd2488077ee4968b0c1f5d156fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7244717b123b325ac83f68b976c1e5761a76b1aacac6aab471d0b644386251d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ce8e051c69710768e7b4fba06cc91026ce4d25baf35cd8f9235bfd348a4451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://837b5de339793bc363199eae6f141038bf069b2832f6375a8e01d49bef7ea63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036
cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ede0889b30698004a8f288b9cc1bb7d00b194c21ff1936b2743f1e7f246e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9c6e044265812e8e2116a81efcc38023cc8da2b92093e3f75c6efa4e359c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117
b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9c6e044265812e8e2116a81efcc38023cc8da2b92093e3f75c6efa4e359c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d404a30b7d0515ae872dab9ec3e60629de532d6f27ab0c9f1e7e3918bad30e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d404a30b7d0515ae872dab9ec3e60629de532d6f27ab0c9f1e7e3918bad30e60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://513b48315443b87dc828225545ae02e1f47580596fc31a7935dd59b80065d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://513b48315443b87dc828225545ae02e1f47580596fc31a7935dd59b80065d26c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\
\"2026-02-24T00:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.707390 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\"
,\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.719096 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.731218 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26
a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.751467 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd58
18c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.764280 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.768139 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 10:58:36.974345751 +0000 UTC Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.776632 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0525cd89-44e0-47f1-856c-f566eb21596a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://391a3f09eb431d4cc4af8eba60855591dd9fe90774ac0ecbf8ed112d41146fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1d96e1625c71100e773b21982fde152e58d
48d96841da879d6ab878342b1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbxhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.789588 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56aba328-bba3-447a-9fc1-9ed04311acf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd599f0dec3d074b172aab55b3b74283b37d4dddc2e6a97b7319e8d7bd15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://357f08b7a73fd26e3a3c8232ff5641ec70da20551310ce9348d3b1462a30735c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52d213e230e92cadfb65d0efd8b4dea058e9f947897ad48c9c9d123ed37263ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:19 crc kubenswrapper[4824]: I0224 00:08:19.801905 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:08:19Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.524004 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4xjg6_d985b875-dd5e-4767-a4e2-209894575a8f/ovnkube-controller/3.log" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.525219 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4xjg6_d985b875-dd5e-4767-a4e2-209894575a8f/ovnkube-controller/2.log" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.529468 4824 generic.go:334] "Generic (PLEG): container finished" podID="d985b875-dd5e-4767-a4e2-209894575a8f" containerID="5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac" exitCode=1 Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.529562 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerDied","Data":"5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac"} Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.529618 4824 scope.go:117] "RemoveContainer" containerID="76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.530774 4824 scope.go:117] "RemoveContainer" containerID="5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac" Feb 24 00:08:20 crc kubenswrapper[4824]: E0224 00:08:20.531169 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 
00:08:20.550700 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56aba328-bba3-447a-9fc1-9ed04311acf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd599f0dec3d074b172aab55b3b74283b37d4dddc2e6a97b7319e8d7bd15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://357f08b7a73fd26e3a3c8232ff5641ec70da20551310ce9348d3b1462a30735c\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52d213e230e92cadfb65d0efd8b4dea058e9f947897ad48c9c9d123ed37263ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.568739 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.583457 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.600611 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7
371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.619794 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:0
7:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.635307 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.650454 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0525cd89-44e0-47f1-856c-f566eb21596a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://391a3f09eb431d4cc4af8eba60855591dd9fe90774ac0ecbf8ed112d41146fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1d96e1625c71100e773b21982fde152e58d48d96841da879d6ab878342b1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbxhh\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.667709 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc53949c-a6a4-48c1-a312-cc8d39a3238f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://706813f751a53e8e88d84668245bb4b94bc39d6b611f0fed9f774c226dc8c632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fe1cb1e114c1440630a33e3dd127af3600c520e37148cc01ab2e4ed346941ba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06
:33Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:06:03.976490 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:06:03.977317 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:06:03.977962 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:06:03.978570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:33.531382 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:33.531507 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:33Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:03Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45746da193a3aaa64e64067e2324afb4e4fb1a90a1357461ed97c0bbb5109f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf86745569172505df6632421bd1587317cb06e26e70937563bd7d0341c6086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213ad3b341ace3f4473aa48ecaaa41814fe670417431f6d5cd04be03482e597c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.691365 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.693729 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.693844 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.693925 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:20 crc kubenswrapper[4824]: E0224 00:08:20.693862 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:20 crc kubenswrapper[4824]: E0224 00:08:20.694127 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:20 crc kubenswrapper[4824]: E0224 00:08:20.694201 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.707383 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17795ce3-5917-4f02-a513-de53b7c702cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd198d5e4adeaa8e23f5262bc17476b82b1c76b3bfd06b385590eede8c0baa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"v
ar-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370e38e80ea6eb0a1add2e84d515cef4e8aeb77dc43beba4d0c3fdeb871f4fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370e38e80ea6eb0a1add2e84d515cef4e8aeb77dc43beba4d0c3fdeb871f4fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.725729 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.740131 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.755293 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98z42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a648113f-3e46-4170-ba30-7155fefbb413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98z42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:20 crc 
kubenswrapper[4824]: I0224 00:08:20.769097 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 09:08:58.740155317 +0000 UTC Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.779714 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23aab44a-c0da-4344-b8b1-7c754b00d6d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bbd80285850cb217ec0994ab8841efb660fd2488077ee4968b0c1f5d156fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7244717b123b325ac83f68b976c1e5761a76b1aacac6aab471d0b644386251d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ce8e051c69710768e7b4fba06cc91026ce4d25baf35cd8f9235bfd348a4451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://837b5de339793bc363199eae6f141038bf069b2832f6375a8e01d49bef7ea63d\\\",\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ede0889b30698004a8f288b9cc1bb7d00b194c21ff1936b2743f1e7f246e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9c6e044265812e8e2116a81efcc38023cc8da2b92093e3f75c6efa4e359c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9c6e044265812e8e2116a81efcc38023cc8da2b92093e3f75c6efa4e359c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d404a30b7d0515ae872dab9ec3e60629de532d6f27ab0c9f1e7e3918bad30e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d404a30b7d0515ae872dab9ec3e60629de532d6f27ab0c9f1e7e3918bad30e60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://513b48315443b87dc828225545ae02e1f47580596fc31a7935dd59b80065d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://513b
48315443b87dc828225545ae02e1f47580596fc31a7935dd59b80065d26c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.797451 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\"
,\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.815456 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:20Z is after 2025-08-24T17:21:41Z" 
Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.831658 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.853509 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39d567bc7ae642dd15ce2ea8650e1aa0fd3688e3c3c061ad145edf2bf963c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:08:11Z\\\",\\\"message\\\":\\\"2026-02-24T00:07:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_61fbc54a-734f-43d4-9941-0e38fbccf016\\\\n2026-02-24T00:07:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_61fbc54a-734f-43d4-9941-0e38fbccf016 to /host/opt/cni/bin/\\\\n2026-02-24T00:07:26Z [verbose] multus-daemon started\\\\n2026-02-24T00:07:26Z [verbose] 
Readiness Indicator file check\\\\n2026-02-24T00:08:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:20 crc kubenswrapper[4824]: I0224 00:08:20.874880 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76341c8e4fd3e6aaeb74b20155da61acd373c362ebccb81a83847b8aa813c8ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:53Z\\\",\\\"message\\\":\\\"61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where 
column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0224 00:07:53.258981 6959 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-98z42] creating logical port openshift-multus_network-metrics-daemon-98z42 for pod on switch crc\\\\nI0224 00:07:53.258997 6959 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh in node crc\\\\nI0224 00:07:53.259010 6959 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh after 0 failed attempt(s)\\\\nI0224 00:07:53.259021 6959 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh\\\\nI0224 00:07:53.258757 6959 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-vcbgn in node crc\\\\nI0224 00:07:53.259038 6959 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-vcbgn after 0 failed attempt(s)\\\\nI0224 00:07:53.259043 6959 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-vcbgn\\\\nI0224 00:07:53.259043 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:08:19Z\\\",\\\"message\\\":\\\"emplate:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0224 00:08:19.714118 7289 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0224 00:08:19.714202 7289 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped 
alrea\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\
\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda0
0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:20Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.535295 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4xjg6_d985b875-dd5e-4767-a4e2-209894575a8f/ovnkube-controller/3.log" Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.538701 4824 scope.go:117] "RemoveContainer" containerID="5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac" Feb 24 00:08:21 crc kubenswrapper[4824]: E0224 00:08:21.538895 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.557110 4824 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.568602 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.579924 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0525cd89-44e0-47f1-856c-f566eb21596a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://391a3f09eb431d4cc4af8eba60855591dd9fe90774ac0ecbf8ed112d41146fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1d96e1625c71100e773b21982fde152e58d
48d96841da879d6ab878342b1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbxhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.594561 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56aba328-bba3-447a-9fc1-9ed04311acf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd599f0dec3d074b172aab55b3b74283b37d4dddc2e6a97b7319e8d7bd15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://357f08b7a73fd26e3a3c8232ff5641ec70da20551310ce9348d3b1462a30735c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52d213e230e92cadfb65d0efd8b4dea058e9f947897ad48c9c9d123ed37263ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.609379 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.621241 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.633993 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7
371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.647472 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc53949c-a6a4-48c1-a312-cc8d39a3238f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://706813f751a53e8e88d84668245bb4b94bc39d6b611f0fed9f774c226dc8c632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fe1cb1e114c144
0630a33e3dd127af3600c520e37148cc01ab2e4ed346941ba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:33Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:06:03.976490 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:06:03.977317 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:06:03.977962 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:06:03.978570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:33.531382 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:33.531507 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:33Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:03Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45746da193a3aaa64e64067e2324afb4e4fb1a90a1357461ed97c0bbb5109f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf86745569172505df6632421bd1587317cb06e26e70937563bd7d0341c6086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213ad3b341ace3f4473aa48ecaaa41814fe670417431f6d5cd04be03482e597c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.660761 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.672279 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17795ce3-5917-4f02-a513-de53b7c702cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd198d5e4adeaa8e23f5262bc17476b82b1c76b3bfd06b385590eede8c0baa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370e38e80ea6eb0a1add2e84d515cef4e8aeb77dc43beba4d0c3fdeb871f4fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370e38e80ea6eb0a1add2e84d515cef4e8aeb77dc43beba4d0c3fdeb871f4fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.687933 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.692758 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:21 crc kubenswrapper[4824]: E0224 00:08:21.692930 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.700675 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.713796 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98z42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a648113f-3e46-4170-ba30-7155fefbb413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98z42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:21 crc 
kubenswrapper[4824]: I0224 00:08:21.732075 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39d567bc7ae642dd15ce2ea8650e1aa0fd3688e3c3c061ad145edf2bf963c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:08:11Z\\\",\\\"message\\\":\\\"2026-02-24T00:07:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_61fbc54a-734f-43d4-9941-0e38fbccf016\\\\n2026-02-24T00:07:26+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_61fbc54a-734f-43d4-9941-0e38fbccf016 to /host/opt/cni/bin/\\\\n2026-02-24T00:07:26Z [verbose] multus-daemon started\\\\n2026-02-24T00:07:26Z [verbose] Readiness Indicator file check\\\\n2026-02-24T00:08:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.751362 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:08:19Z\\\",\\\"message\\\":\\\"emplate:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, 
Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0224 00:08:19.714118 7289 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0224 00:08:19.714202 7289 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped alrea\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:08:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be
3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.769685 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 04:07:30.45471987 +0000 UTC Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.770180 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23aab44a-c0da-4344-b8b1-7c754b00d6d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bbd80285850cb217ec0994ab8841efb660fd2488077ee4968b0c1f5d156fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7244717b123b325ac83f68b976c1e5761a76b1aacac6aab471d0b644386251d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ce8e051c69710768e7b4fba06cc91026ce4d25baf35cd8f9235bfd348a4451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://837b5de339793bc363199eae6f141038bf069b2832f6375a8e01d49bef7ea63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ede0889b30698004a8f288b9cc1bb7d00b194c21ff1936b2743f1e7f246e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9c6e044265812e8e2116a81efcc38023cc8da2b92093e3f75c6efa4e359c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9c6e044265812e8e2116a81efcc38023cc8da2b92093e3f75c6efa4e359c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d404a30b7d0515ae872dab9ec3e60629de532d6f27ab0c9f1e7e3918bad30e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d404a30b7d0515ae872dab9ec3e60629de532d6f27ab0c9f1e7e3918bad30e60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://513b48315443b87dc828225545ae02e1f47580596fc31a7935dd59b80065d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://513b48315443b87dc828225545ae02e1f47580596fc31a7935dd59b80065d26c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.784750 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:21 crc kubenswrapper[4824]: E0224 00:08:21.785408 4824 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.803039 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:21 crc kubenswrapper[4824]: I0224 00:08:21.821150 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:21Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.693104 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.693187 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.693187 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:22 crc kubenswrapper[4824]: E0224 00:08:22.693326 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:22 crc kubenswrapper[4824]: E0224 00:08:22.693479 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:22 crc kubenswrapper[4824]: E0224 00:08:22.693632 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.770367 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 23:49:59.805144189 +0000 UTC Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.819829 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.819861 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.819872 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.819886 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:08:22 crc 
kubenswrapper[4824]: I0224 00:08:22.819894 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:08:22Z","lastTransitionTime":"2026-02-24T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:08:22 crc kubenswrapper[4824]: E0224 00:08:22.832919 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.837225 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.837258 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.837266 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.837282 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.837292 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:08:22Z","lastTransitionTime":"2026-02-24T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:08:22 crc kubenswrapper[4824]: E0224 00:08:22.854903 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.858193 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.858237 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.858251 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.858274 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.858289 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:08:22Z","lastTransitionTime":"2026-02-24T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:08:22 crc kubenswrapper[4824]: E0224 00:08:22.871865 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.876137 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.876181 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.876194 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.876213 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.876225 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:08:22Z","lastTransitionTime":"2026-02-24T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:08:22 crc kubenswrapper[4824]: E0224 00:08:22.893661 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.897313 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.897354 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.897365 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.897383 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:08:22 crc kubenswrapper[4824]: I0224 00:08:22.897393 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:08:22Z","lastTransitionTime":"2026-02-24T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:08:22 crc kubenswrapper[4824]: E0224 00:08:22.911067 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:22 crc kubenswrapper[4824]: E0224 00:08:22.911181 4824 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:08:23 crc kubenswrapper[4824]: I0224 00:08:23.693389 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:23 crc kubenswrapper[4824]: E0224 00:08:23.693629 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:23 crc kubenswrapper[4824]: I0224 00:08:23.771635 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 21:01:22.567733481 +0000 UTC Feb 24 00:08:24 crc kubenswrapper[4824]: I0224 00:08:24.693625 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:24 crc kubenswrapper[4824]: I0224 00:08:24.693638 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:24 crc kubenswrapper[4824]: E0224 00:08:24.694225 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:24 crc kubenswrapper[4824]: I0224 00:08:24.693696 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:24 crc kubenswrapper[4824]: E0224 00:08:24.694351 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:24 crc kubenswrapper[4824]: E0224 00:08:24.694424 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:24 crc kubenswrapper[4824]: I0224 00:08:24.772580 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 11:38:23.97220231 +0000 UTC Feb 24 00:08:25 crc kubenswrapper[4824]: I0224 00:08:25.693388 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:25 crc kubenswrapper[4824]: E0224 00:08:25.693587 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:25 crc kubenswrapper[4824]: I0224 00:08:25.773137 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 02:34:53.285288771 +0000 UTC Feb 24 00:08:26 crc kubenswrapper[4824]: I0224 00:08:26.693839 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:26 crc kubenswrapper[4824]: I0224 00:08:26.693932 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:26 crc kubenswrapper[4824]: E0224 00:08:26.694067 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:26 crc kubenswrapper[4824]: E0224 00:08:26.694192 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:26 crc kubenswrapper[4824]: I0224 00:08:26.693857 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:26 crc kubenswrapper[4824]: E0224 00:08:26.694865 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:26 crc kubenswrapper[4824]: I0224 00:08:26.719058 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23aab44a-c0da-4344-b8b1-7c754b00d6d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65bbd80285850cb217ec0994ab8841efb660fd2488077ee4968b0c1f5d156fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7244717b123b325ac83f68b976c1e5761a76b1aacac6aab471d0b644386251d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ce8e051c69710768e7b4fba06cc91026ce4d25baf35cd8f9235bfd348a4451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://837b5de339793bc363199eae6f141038bf069b2832f6375a8e01d49bef7ea63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8
ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ede0889b30698004a8f288b9cc1bb7d00b194c21ff1936b2743f1e7f246e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9c6e044265812e8e2116a81efcc38023cc8da2b92093e3f75c6efa4e359c4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac9c6e044265812e8e2116a81efcc38023cc8da2b92093e3f75c6efa4e359c4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d404a30b7d0515ae872dab9ec3e60629de532d6f27ab0c9f1e7e3918bad30e60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d404a30b7d0515ae872dab9ec3e60629de532d6f27ab0c9f1e7e3918bad30e60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://513b48315443b87dc828225545ae02e1f47580596fc31a7935dd59b80065d26c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://513b48315443b87dc828225545ae02e1f47580596fc31a7935dd59b80065d26c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-2
4T00:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:26 crc kubenswrapper[4824]: I0224 00:08:26.737875 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5209eb40-b077-42f2-9239-27bf1cde0e05\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:36Z\\\"
,\\\"message\\\":\\\"W0224 00:06:35.979734 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:35.980117 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891595 cert, and key in /tmp/serving-cert-1325703996/serving-signer.crt, /tmp/serving-cert-1325703996/serving-signer.key\\\\nI0224 00:06:36.313594 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:36.323413 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0224 00:06:36.325040 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:36.326853 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1325703996/tls.crt::/tmp/serving-cert-1325703996/tls.key\\\\\\\"\\\\nF0224 00:06:36.787238 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:26 crc kubenswrapper[4824]: I0224 00:08:26.754711 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684e1bbbe61030fdd75db917dbf9ac4fedf392502a653c8f4600dc829410aa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://983e13c7b9d6d0b8183250aedfe93ec3ac42de4dae305c03caa9f653082bb329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:26Z is after 2025-08-24T17:21:41Z" 
Feb 24 00:08:26 crc kubenswrapper[4824]: I0224 00:08:26.775665 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 14:47:10.341864609 +0000 UTC Feb 24 00:08:26 crc kubenswrapper[4824]: E0224 00:08:26.786059 4824 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 00:08:26 crc kubenswrapper[4824]: I0224 00:08:26.797853 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:26 crc kubenswrapper[4824]: I0224 00:08:26.821795 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wvqfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a39d567bc7ae642dd15ce2ea8650e1aa0fd3688e3c3c061ad145edf2bf963c06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:08:11Z\\\",\\\"message\\\":\\\"2026-02-24T00:07:26+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_61fbc54a-734f-43d4-9941-0e38fbccf016\\\\n2026-02-24T00:07:26+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_61fbc54a-734f-43d4-9941-0e38fbccf016 to /host/opt/cni/bin/\\\\n2026-02-24T00:07:26Z [verbose] multus-daemon started\\\\n2026-02-24T00:07:26Z [verbose] 
Readiness Indicator file check\\\\n2026-02-24T00:08:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjnn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wvqfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:26 crc kubenswrapper[4824]: I0224 00:08:26.847334 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d985b875-dd5e-4767-a4e2-209894575a8f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:08:19Z\\\",\\\"message\\\":\\\"emplate:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, 
Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0224 00:08:19.714118 7289 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0224 00:08:19.714202 7289 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped alrea\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:08:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cd4549610e3a7e2be
3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x6rnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4xjg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:26 crc kubenswrapper[4824]: I0224 00:08:26.867988 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0525cd89-44e0-47f1-856c-f566eb21596a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://391a3f09eb431d4cc4af8eba60855591dd9fe90774ac0ecbf8ed112d41146fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f1d96e1625c71100e773b21982fde152e58d
48d96841da879d6ab878342b1e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5smpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbxhh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:26 crc kubenswrapper[4824]: I0224 00:08:26.880949 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56aba328-bba3-447a-9fc1-9ed04311acf5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd599f0dec3d074b172aab55b3b74283b37d4dddc2e6a97b7319e8d7bd15b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://357f08b7a73fd26e3a3c8232ff5641ec70da20551310ce9348d3b1462a30735c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52d213e230e92cadfb65d0efd8b4dea058e9f947897ad48c9c9d123ed37263ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1580132fb7d7b9203017c471a27257d03282c1b3dd97167f647ecd3848ba69a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:26 crc kubenswrapper[4824]: I0224 00:08:26.893198 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b147bde27310be40a9d49e2254ff858c03a32602bad1de273272b7b3292f057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:08:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:26 crc kubenswrapper[4824]: I0224 00:08:26.904682 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-nwxht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea7ca84e-cb8c-4bdf-ba10-bd2aca09bdaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69860166bff170380f7baa9e9b405900881403e17fbb017db27e9136e1c3696e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lxc9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-nwxht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:26 crc kubenswrapper[4824]: I0224 00:08:26.917773 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"939ca085-9383-42e6-b7d6-37f101137273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2cc64aac8c2354ba053a22c4c5cde7
371a99c7aaaece1fe2dd2f7b41eceb4e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j6f4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vcbgn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:26 crc kubenswrapper[4824]: I0224 00:08:26.935370 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d64vq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28309e58-76b2-4fe6-a1e5-569b6f0b3a5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b16a40794ba6c7bfbe672f5438284c9479efc8d1fe6a87b8292b3595799772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002f92646d054b312160a3b59900255e35829570bf3dbef9c871b6772f3628f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e7a337a9d263f11a833a233e7ad735468a159be0fe3a62e21d72ae816355a80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e69c7a6d07cb824c7aa646720b7d0a6563a3fe4de8af9bfcf85b8a4093df9154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:0
7:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6cd5818c073a3dabff4b4b5d2cc02a40b8a722c642730ea29e07f840db9afc50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43d152897b22f88ea3328011ed0b921e0f33f5c0375f4af98b424da48c20a268\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86a454258a9043824fa0ce318ad33be2fc4c029f75f0fc489b22716d69d67d45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:31Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ttlvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d64vq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:26 crc kubenswrapper[4824]: I0224 00:08:26.947069 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-2zsq6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bffd69c2-56a8-4fa0-9fbf-82a508f80ec1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4215a2974f4f4d1ab7b093c8d457f1f450276c89565a95ea13f1d42db9f07afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkj25\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-2zsq6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:26 crc kubenswrapper[4824]: I0224 00:08:26.960737 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc53949c-a6a4-48c1-a312-cc8d39a3238f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://706813f751a53e8e88d84668245bb4b94bc39d6b611f0fed9f774c226dc8c632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fe1cb1e114c1440630a33e3dd127af3600c520e37148cc01ab2e4ed346941ba\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:33Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:06:03.976490 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:06:03.977317 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:06:03.977962 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:06:03.978570 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:33.531382 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:33.531507 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-24T00:06:33Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:03Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45746da193a3aaa64e64067e2324afb4e4fb1a90a1357461ed97c0bbb5109f5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf86745569172505df6632421bd1587317cb06e26e70937563bd7d0341c6086\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://213ad3b341ace3f4473aa48ecaaa41814fe670417431f6d5cd04be03482e597c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:26 crc kubenswrapper[4824]: I0224 00:08:26.975443 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc76b60d9c2e639d3340a172e70670a434008774f3ec5751dae2ffc78a913486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:26 crc kubenswrapper[4824]: I0224 00:08:26.986552 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17795ce3-5917-4f02-a513-de53b7c702cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdd198d5e4adeaa8e23f5262bc17476b82b1c76b3bfd06b385590eede8c0baa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370e38e80ea6eb0a1add2e84d515cef4e8aeb77dc43beba4d0c3fdeb871f4fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370e38e80ea6eb0a1add2e84d515cef4e8aeb77dc43beba4d0c3fdeb871f4fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:27 crc kubenswrapper[4824]: I0224 00:08:27.000868 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:27 crc kubenswrapper[4824]: I0224 00:08:27.014306 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:27 crc kubenswrapper[4824]: I0224 00:08:27.025843 4824 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-98z42" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a648113f-3e46-4170-ba30-7155fefbb413\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svh6v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-98z42\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:27 crc 
kubenswrapper[4824]: I0224 00:08:27.693618 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:27 crc kubenswrapper[4824]: E0224 00:08:27.693769 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:27 crc kubenswrapper[4824]: I0224 00:08:27.776356 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 17:26:08.893577574 +0000 UTC Feb 24 00:08:28 crc kubenswrapper[4824]: I0224 00:08:28.693881 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:28 crc kubenswrapper[4824]: I0224 00:08:28.693979 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:28 crc kubenswrapper[4824]: I0224 00:08:28.693910 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:28 crc kubenswrapper[4824]: E0224 00:08:28.694174 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:28 crc kubenswrapper[4824]: E0224 00:08:28.694266 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:28 crc kubenswrapper[4824]: E0224 00:08:28.694408 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:28 crc kubenswrapper[4824]: I0224 00:08:28.776595 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 10:47:20.482676861 +0000 UTC Feb 24 00:08:29 crc kubenswrapper[4824]: I0224 00:08:29.693598 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:29 crc kubenswrapper[4824]: E0224 00:08:29.693768 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:29 crc kubenswrapper[4824]: I0224 00:08:29.777539 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 02:14:48.631276644 +0000 UTC Feb 24 00:08:30 crc kubenswrapper[4824]: I0224 00:08:30.693336 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:30 crc kubenswrapper[4824]: I0224 00:08:30.693431 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:30 crc kubenswrapper[4824]: I0224 00:08:30.693501 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:30 crc kubenswrapper[4824]: E0224 00:08:30.693903 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:30 crc kubenswrapper[4824]: E0224 00:08:30.694149 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:30 crc kubenswrapper[4824]: E0224 00:08:30.694386 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:30 crc kubenswrapper[4824]: I0224 00:08:30.778490 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 18:21:49.045162605 +0000 UTC Feb 24 00:08:31 crc kubenswrapper[4824]: I0224 00:08:31.693094 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:31 crc kubenswrapper[4824]: E0224 00:08:31.693346 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:31 crc kubenswrapper[4824]: I0224 00:08:31.779157 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 21:32:35.110024379 +0000 UTC Feb 24 00:08:31 crc kubenswrapper[4824]: E0224 00:08:31.787178 4824 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 00:08:32 crc kubenswrapper[4824]: I0224 00:08:32.694215 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:32 crc kubenswrapper[4824]: E0224 00:08:32.694381 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:32 crc kubenswrapper[4824]: I0224 00:08:32.694290 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:32 crc kubenswrapper[4824]: E0224 00:08:32.694456 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:32 crc kubenswrapper[4824]: I0224 00:08:32.694277 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:32 crc kubenswrapper[4824]: E0224 00:08:32.694512 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:32 crc kubenswrapper[4824]: I0224 00:08:32.780215 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 10:10:06.550306153 +0000 UTC Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.148362 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.148404 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.148414 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.148435 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.148446 4824 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:08:33Z","lastTransitionTime":"2026-02-24T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:08:33 crc kubenswrapper[4824]: E0224 00:08:33.168213 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.176784 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.176857 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.176990 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.177804 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.177823 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:08:33Z","lastTransitionTime":"2026-02-24T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:08:33 crc kubenswrapper[4824]: E0224 00:08:33.193949 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.198551 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.198591 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.198603 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.198622 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.198634 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:08:33Z","lastTransitionTime":"2026-02-24T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:08:33 crc kubenswrapper[4824]: E0224 00:08:33.214000 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.217533 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.217574 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.217583 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.217602 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.217613 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:08:33Z","lastTransitionTime":"2026-02-24T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:08:33 crc kubenswrapper[4824]: E0224 00:08:33.231767 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.236663 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.236698 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.236708 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.236727 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.236743 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:08:33Z","lastTransitionTime":"2026-02-24T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:08:33 crc kubenswrapper[4824]: E0224 00:08:33.250899 4824 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:08:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7ea41d01-04ab-44da-af10-993e94777268\\\",\\\"systemUUID\\\":\\\"d5e3d68d-d538-4dbe-b3fe-7347ab36b29a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:08:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:08:33 crc kubenswrapper[4824]: E0224 00:08:33.251049 4824 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.693165 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:33 crc kubenswrapper[4824]: E0224 00:08:33.693416 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:33 crc kubenswrapper[4824]: I0224 00:08:33.780883 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 10:57:54.168614833 +0000 UTC Feb 24 00:08:34 crc kubenswrapper[4824]: I0224 00:08:34.693146 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:34 crc kubenswrapper[4824]: I0224 00:08:34.693174 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:34 crc kubenswrapper[4824]: I0224 00:08:34.693705 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:34 crc kubenswrapper[4824]: E0224 00:08:34.693911 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:34 crc kubenswrapper[4824]: E0224 00:08:34.693991 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:34 crc kubenswrapper[4824]: E0224 00:08:34.694130 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:34 crc kubenswrapper[4824]: I0224 00:08:34.694452 4824 scope.go:117] "RemoveContainer" containerID="5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac" Feb 24 00:08:34 crc kubenswrapper[4824]: E0224 00:08:34.694941 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" Feb 24 00:08:34 crc kubenswrapper[4824]: I0224 00:08:34.781591 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 05:22:23.613055346 +0000 UTC Feb 24 00:08:35 crc kubenswrapper[4824]: I0224 00:08:35.693577 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:35 crc kubenswrapper[4824]: E0224 00:08:35.693918 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:35 crc kubenswrapper[4824]: I0224 00:08:35.781838 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 17:25:17.474643412 +0000 UTC Feb 24 00:08:36 crc kubenswrapper[4824]: I0224 00:08:36.693453 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:36 crc kubenswrapper[4824]: E0224 00:08:36.693774 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:36 crc kubenswrapper[4824]: I0224 00:08:36.693900 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:36 crc kubenswrapper[4824]: I0224 00:08:36.694077 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:36 crc kubenswrapper[4824]: E0224 00:08:36.694668 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:36 crc kubenswrapper[4824]: E0224 00:08:36.694926 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:36 crc kubenswrapper[4824]: I0224 00:08:36.730892 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=35.730860675 podStartE2EDuration="35.730860675s" podCreationTimestamp="2026-02-24 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:36.730566657 +0000 UTC m=+180.720191156" watchObservedRunningTime="2026-02-24 00:08:36.730860675 +0000 UTC m=+180.720485184" Feb 24 00:08:36 crc kubenswrapper[4824]: I0224 00:08:36.782824 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 02:03:02.927029489 +0000 UTC Feb 24 00:08:36 crc kubenswrapper[4824]: E0224 00:08:36.787695 4824 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 00:08:36 crc kubenswrapper[4824]: I0224 00:08:36.813032 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=26.812995875 podStartE2EDuration="26.812995875s" podCreationTimestamp="2026-02-24 00:08:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:36.810727802 +0000 UTC m=+180.800352291" watchObservedRunningTime="2026-02-24 00:08:36.812995875 +0000 UTC m=+180.802620364" Feb 24 00:08:36 crc kubenswrapper[4824]: I0224 00:08:36.834766 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.8347431 podStartE2EDuration="1m29.8347431s" podCreationTimestamp="2026-02-24 00:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:36.833506556 +0000 UTC m=+180.823131025" watchObservedRunningTime="2026-02-24 00:08:36.8347431 +0000 UTC m=+180.824367569" Feb 24 00:08:36 crc kubenswrapper[4824]: I0224 00:08:36.914320 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-wvqfl" podStartSLOduration=106.914297149 podStartE2EDuration="1m46.914297149s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:36.882243551 +0000 UTC m=+180.871868060" watchObservedRunningTime="2026-02-24 00:08:36.914297149 +0000 UTC m=+180.903921618" Feb 24 00:08:36 crc kubenswrapper[4824]: I0224 00:08:36.931173 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbxhh" podStartSLOduration=106.931144471 podStartE2EDuration="1m46.931144471s" 
podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:36.930051841 +0000 UTC m=+180.919676310" watchObservedRunningTime="2026-02-24 00:08:36.931144471 +0000 UTC m=+180.920768950" Feb 24 00:08:36 crc kubenswrapper[4824]: I0224 00:08:36.964168 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=44.964145494 podStartE2EDuration="44.964145494s" podCreationTimestamp="2026-02-24 00:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:36.948537557 +0000 UTC m=+180.938162046" watchObservedRunningTime="2026-02-24 00:08:36.964145494 +0000 UTC m=+180.953769973" Feb 24 00:08:36 crc kubenswrapper[4824]: I0224 00:08:36.977016 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-nwxht" podStartSLOduration=107.976995836 podStartE2EDuration="1m47.976995836s" podCreationTimestamp="2026-02-24 00:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:36.976649397 +0000 UTC m=+180.966273896" watchObservedRunningTime="2026-02-24 00:08:36.976995836 +0000 UTC m=+180.966620315" Feb 24 00:08:36 crc kubenswrapper[4824]: I0224 00:08:36.991231 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podStartSLOduration=106.991207906 podStartE2EDuration="1m46.991207906s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:36.991149274 +0000 UTC m=+180.980773753" 
watchObservedRunningTime="2026-02-24 00:08:36.991207906 +0000 UTC m=+180.980832365" Feb 24 00:08:37 crc kubenswrapper[4824]: I0224 00:08:37.012445 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-d64vq" podStartSLOduration=107.012425867 podStartE2EDuration="1m47.012425867s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:37.011969294 +0000 UTC m=+181.001593783" watchObservedRunningTime="2026-02-24 00:08:37.012425867 +0000 UTC m=+181.002050336" Feb 24 00:08:37 crc kubenswrapper[4824]: I0224 00:08:37.042794 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-2zsq6" podStartSLOduration=108.042766958 podStartE2EDuration="1m48.042766958s" podCreationTimestamp="2026-02-24 00:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:37.027617343 +0000 UTC m=+181.017241812" watchObservedRunningTime="2026-02-24 00:08:37.042766958 +0000 UTC m=+181.032391427" Feb 24 00:08:37 crc kubenswrapper[4824]: I0224 00:08:37.061096 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=41.061072629 podStartE2EDuration="41.061072629s" podCreationTimestamp="2026-02-24 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:37.04393543 +0000 UTC m=+181.033559899" watchObservedRunningTime="2026-02-24 00:08:37.061072629 +0000 UTC m=+181.050697098" Feb 24 00:08:37 crc kubenswrapper[4824]: I0224 00:08:37.692921 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:37 crc kubenswrapper[4824]: E0224 00:08:37.693099 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:37 crc kubenswrapper[4824]: I0224 00:08:37.783641 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:32:21.416339334 +0000 UTC Feb 24 00:08:38 crc kubenswrapper[4824]: I0224 00:08:38.693421 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:38 crc kubenswrapper[4824]: I0224 00:08:38.693441 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:38 crc kubenswrapper[4824]: I0224 00:08:38.693634 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:38 crc kubenswrapper[4824]: E0224 00:08:38.693839 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:38 crc kubenswrapper[4824]: E0224 00:08:38.693993 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:38 crc kubenswrapper[4824]: E0224 00:08:38.694068 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:38 crc kubenswrapper[4824]: I0224 00:08:38.784497 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 08:49:02.046724666 +0000 UTC Feb 24 00:08:39 crc kubenswrapper[4824]: I0224 00:08:39.693177 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:39 crc kubenswrapper[4824]: E0224 00:08:39.693358 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:39 crc kubenswrapper[4824]: I0224 00:08:39.785558 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 17:25:12.503278456 +0000 UTC Feb 24 00:08:40 crc kubenswrapper[4824]: I0224 00:08:40.597418 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs\") pod \"network-metrics-daemon-98z42\" (UID: \"a648113f-3e46-4170-ba30-7155fefbb413\") " pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:40 crc kubenswrapper[4824]: E0224 00:08:40.597664 4824 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:08:40 crc kubenswrapper[4824]: E0224 00:08:40.597762 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs podName:a648113f-3e46-4170-ba30-7155fefbb413 nodeName:}" failed. No retries permitted until 2026-02-24 00:09:44.597734963 +0000 UTC m=+248.587359472 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs") pod "network-metrics-daemon-98z42" (UID: "a648113f-3e46-4170-ba30-7155fefbb413") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:08:40 crc kubenswrapper[4824]: I0224 00:08:40.693862 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:40 crc kubenswrapper[4824]: E0224 00:08:40.694059 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:40 crc kubenswrapper[4824]: I0224 00:08:40.694311 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:40 crc kubenswrapper[4824]: E0224 00:08:40.694433 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:40 crc kubenswrapper[4824]: I0224 00:08:40.694686 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:40 crc kubenswrapper[4824]: E0224 00:08:40.694763 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:40 crc kubenswrapper[4824]: I0224 00:08:40.785783 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:05:07.907587907 +0000 UTC Feb 24 00:08:41 crc kubenswrapper[4824]: I0224 00:08:41.693114 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:41 crc kubenswrapper[4824]: E0224 00:08:41.693306 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:41 crc kubenswrapper[4824]: I0224 00:08:41.786217 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 00:41:56.208868043 +0000 UTC Feb 24 00:08:41 crc kubenswrapper[4824]: E0224 00:08:41.789395 4824 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 00:08:42 crc kubenswrapper[4824]: I0224 00:08:42.693315 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:42 crc kubenswrapper[4824]: I0224 00:08:42.693377 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:42 crc kubenswrapper[4824]: E0224 00:08:42.693503 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:42 crc kubenswrapper[4824]: E0224 00:08:42.693621 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:42 crc kubenswrapper[4824]: I0224 00:08:42.693836 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:42 crc kubenswrapper[4824]: E0224 00:08:42.693923 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:42 crc kubenswrapper[4824]: I0224 00:08:42.787450 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:42:30.673417665 +0000 UTC Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.405888 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.405959 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.405972 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.405995 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.406009 4824 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:08:43Z","lastTransitionTime":"2026-02-24T00:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.451361 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-qcp8r"] Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.452100 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qcp8r" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.456165 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.456214 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.456464 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.458639 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.544070 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98c072cc-8e2f-446c-b225-23f6d6e08ffd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qcp8r\" (UID: \"98c072cc-8e2f-446c-b225-23f6d6e08ffd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qcp8r" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.544130 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/98c072cc-8e2f-446c-b225-23f6d6e08ffd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qcp8r\" (UID: \"98c072cc-8e2f-446c-b225-23f6d6e08ffd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qcp8r" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.544244 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/98c072cc-8e2f-446c-b225-23f6d6e08ffd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qcp8r\" (UID: \"98c072cc-8e2f-446c-b225-23f6d6e08ffd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qcp8r" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.544478 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98c072cc-8e2f-446c-b225-23f6d6e08ffd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qcp8r\" (UID: \"98c072cc-8e2f-446c-b225-23f6d6e08ffd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qcp8r" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.544626 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/98c072cc-8e2f-446c-b225-23f6d6e08ffd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qcp8r\" (UID: \"98c072cc-8e2f-446c-b225-23f6d6e08ffd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qcp8r" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.645550 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98c072cc-8e2f-446c-b225-23f6d6e08ffd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qcp8r\" (UID: \"98c072cc-8e2f-446c-b225-23f6d6e08ffd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qcp8r" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.645640 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/98c072cc-8e2f-446c-b225-23f6d6e08ffd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qcp8r\" (UID: \"98c072cc-8e2f-446c-b225-23f6d6e08ffd\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qcp8r" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.645684 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/98c072cc-8e2f-446c-b225-23f6d6e08ffd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qcp8r\" (UID: \"98c072cc-8e2f-446c-b225-23f6d6e08ffd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qcp8r" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.645724 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98c072cc-8e2f-446c-b225-23f6d6e08ffd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qcp8r\" (UID: \"98c072cc-8e2f-446c-b225-23f6d6e08ffd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qcp8r" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.645748 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/98c072cc-8e2f-446c-b225-23f6d6e08ffd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qcp8r\" (UID: \"98c072cc-8e2f-446c-b225-23f6d6e08ffd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qcp8r" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.645804 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/98c072cc-8e2f-446c-b225-23f6d6e08ffd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qcp8r\" (UID: \"98c072cc-8e2f-446c-b225-23f6d6e08ffd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qcp8r" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.645826 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/98c072cc-8e2f-446c-b225-23f6d6e08ffd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qcp8r\" (UID: \"98c072cc-8e2f-446c-b225-23f6d6e08ffd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qcp8r" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.646899 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/98c072cc-8e2f-446c-b225-23f6d6e08ffd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qcp8r\" (UID: \"98c072cc-8e2f-446c-b225-23f6d6e08ffd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qcp8r" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.658069 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98c072cc-8e2f-446c-b225-23f6d6e08ffd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qcp8r\" (UID: \"98c072cc-8e2f-446c-b225-23f6d6e08ffd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qcp8r" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.672124 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98c072cc-8e2f-446c-b225-23f6d6e08ffd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qcp8r\" (UID: \"98c072cc-8e2f-446c-b225-23f6d6e08ffd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qcp8r" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.693243 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:43 crc kubenswrapper[4824]: E0224 00:08:43.693697 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.774746 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qcp8r" Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.788433 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 13:19:30.045191828 +0000 UTC Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.788908 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 24 00:08:43 crc kubenswrapper[4824]: I0224 00:08:43.798676 4824 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 00:08:43 crc kubenswrapper[4824]: W0224 00:08:43.800879 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98c072cc_8e2f_446c_b225_23f6d6e08ffd.slice/crio-bb6282df6113fc1e7f90312eaf17f161b56b9661ec8064503be27b816caa83e8 WatchSource:0}: Error finding container bb6282df6113fc1e7f90312eaf17f161b56b9661ec8064503be27b816caa83e8: Status 404 returned error can't find the container with id bb6282df6113fc1e7f90312eaf17f161b56b9661ec8064503be27b816caa83e8 Feb 24 00:08:44 crc kubenswrapper[4824]: I0224 00:08:44.617971 4824 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qcp8r" event={"ID":"98c072cc-8e2f-446c-b225-23f6d6e08ffd","Type":"ContainerStarted","Data":"0c475f8ecfb6b09b7dc0d57991f13c43dd262bf323ce0a90ba820375a27478af"} Feb 24 00:08:44 crc kubenswrapper[4824]: I0224 00:08:44.618037 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qcp8r" event={"ID":"98c072cc-8e2f-446c-b225-23f6d6e08ffd","Type":"ContainerStarted","Data":"bb6282df6113fc1e7f90312eaf17f161b56b9661ec8064503be27b816caa83e8"} Feb 24 00:08:44 crc kubenswrapper[4824]: I0224 00:08:44.692867 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:44 crc kubenswrapper[4824]: I0224 00:08:44.692904 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:44 crc kubenswrapper[4824]: I0224 00:08:44.693001 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:44 crc kubenswrapper[4824]: E0224 00:08:44.693287 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:44 crc kubenswrapper[4824]: E0224 00:08:44.693598 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:44 crc kubenswrapper[4824]: E0224 00:08:44.693730 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:45 crc kubenswrapper[4824]: I0224 00:08:45.693097 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:45 crc kubenswrapper[4824]: E0224 00:08:45.693412 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:46 crc kubenswrapper[4824]: I0224 00:08:46.695483 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:46 crc kubenswrapper[4824]: I0224 00:08:46.695622 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:46 crc kubenswrapper[4824]: E0224 00:08:46.695784 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:46 crc kubenswrapper[4824]: I0224 00:08:46.695996 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:46 crc kubenswrapper[4824]: E0224 00:08:46.696122 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:46 crc kubenswrapper[4824]: E0224 00:08:46.696257 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:46 crc kubenswrapper[4824]: E0224 00:08:46.790149 4824 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 00:08:47 crc kubenswrapper[4824]: I0224 00:08:47.692704 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:47 crc kubenswrapper[4824]: E0224 00:08:47.692852 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:48 crc kubenswrapper[4824]: I0224 00:08:48.693038 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:48 crc kubenswrapper[4824]: E0224 00:08:48.693230 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:48 crc kubenswrapper[4824]: I0224 00:08:48.693341 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:48 crc kubenswrapper[4824]: E0224 00:08:48.693897 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:48 crc kubenswrapper[4824]: I0224 00:08:48.694001 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:48 crc kubenswrapper[4824]: E0224 00:08:48.694115 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:48 crc kubenswrapper[4824]: I0224 00:08:48.694151 4824 scope.go:117] "RemoveContainer" containerID="5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac" Feb 24 00:08:48 crc kubenswrapper[4824]: E0224 00:08:48.694509 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4xjg6_openshift-ovn-kubernetes(d985b875-dd5e-4767-a4e2-209894575a8f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" Feb 24 00:08:49 crc kubenswrapper[4824]: I0224 00:08:49.693535 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:49 crc kubenswrapper[4824]: E0224 00:08:49.693707 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:50 crc kubenswrapper[4824]: I0224 00:08:50.693366 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:50 crc kubenswrapper[4824]: I0224 00:08:50.693446 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:50 crc kubenswrapper[4824]: I0224 00:08:50.693464 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:50 crc kubenswrapper[4824]: E0224 00:08:50.693607 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:50 crc kubenswrapper[4824]: E0224 00:08:50.693702 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:50 crc kubenswrapper[4824]: E0224 00:08:50.693774 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:51 crc kubenswrapper[4824]: I0224 00:08:51.693084 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:51 crc kubenswrapper[4824]: E0224 00:08:51.693623 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:51 crc kubenswrapper[4824]: E0224 00:08:51.791615 4824 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 00:08:52 crc kubenswrapper[4824]: I0224 00:08:52.693481 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:52 crc kubenswrapper[4824]: I0224 00:08:52.693597 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:52 crc kubenswrapper[4824]: I0224 00:08:52.693612 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:52 crc kubenswrapper[4824]: E0224 00:08:52.693729 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:52 crc kubenswrapper[4824]: E0224 00:08:52.693826 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:52 crc kubenswrapper[4824]: E0224 00:08:52.693973 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:53 crc kubenswrapper[4824]: I0224 00:08:53.693248 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:53 crc kubenswrapper[4824]: E0224 00:08:53.693652 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:54 crc kubenswrapper[4824]: I0224 00:08:54.693779 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:54 crc kubenswrapper[4824]: I0224 00:08:54.693827 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:54 crc kubenswrapper[4824]: I0224 00:08:54.693802 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:54 crc kubenswrapper[4824]: E0224 00:08:54.693963 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:54 crc kubenswrapper[4824]: E0224 00:08:54.694173 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:54 crc kubenswrapper[4824]: E0224 00:08:54.694370 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:55 crc kubenswrapper[4824]: I0224 00:08:55.693090 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:55 crc kubenswrapper[4824]: E0224 00:08:55.693481 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:56 crc kubenswrapper[4824]: I0224 00:08:56.692999 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:56 crc kubenswrapper[4824]: I0224 00:08:56.693083 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:56 crc kubenswrapper[4824]: I0224 00:08:56.693083 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:56 crc kubenswrapper[4824]: E0224 00:08:56.696075 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:56 crc kubenswrapper[4824]: E0224 00:08:56.696220 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:56 crc kubenswrapper[4824]: E0224 00:08:56.696390 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:56 crc kubenswrapper[4824]: E0224 00:08:56.792649 4824 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 00:08:57 crc kubenswrapper[4824]: I0224 00:08:57.693036 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:57 crc kubenswrapper[4824]: E0224 00:08:57.693192 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:08:58 crc kubenswrapper[4824]: I0224 00:08:58.679696 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wvqfl_15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac/kube-multus/1.log" Feb 24 00:08:58 crc kubenswrapper[4824]: I0224 00:08:58.681178 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wvqfl_15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac/kube-multus/0.log" Feb 24 00:08:58 crc kubenswrapper[4824]: I0224 00:08:58.681255 4824 generic.go:334] "Generic (PLEG): container finished" podID="15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac" containerID="a39d567bc7ae642dd15ce2ea8650e1aa0fd3688e3c3c061ad145edf2bf963c06" exitCode=1 Feb 24 00:08:58 crc kubenswrapper[4824]: I0224 00:08:58.681319 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wvqfl" event={"ID":"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac","Type":"ContainerDied","Data":"a39d567bc7ae642dd15ce2ea8650e1aa0fd3688e3c3c061ad145edf2bf963c06"} Feb 24 00:08:58 crc kubenswrapper[4824]: I0224 00:08:58.681377 4824 scope.go:117] "RemoveContainer" containerID="4cafa37efd3e722940c0c2daae6107dc324136c75f889084d39711c20d7e887d" Feb 24 00:08:58 crc kubenswrapper[4824]: I0224 00:08:58.682249 4824 scope.go:117] "RemoveContainer" containerID="a39d567bc7ae642dd15ce2ea8650e1aa0fd3688e3c3c061ad145edf2bf963c06" Feb 24 00:08:58 crc kubenswrapper[4824]: E0224 00:08:58.682641 4824 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-wvqfl_openshift-multus(15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac)\"" pod="openshift-multus/multus-wvqfl" podUID="15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac" Feb 24 00:08:58 crc kubenswrapper[4824]: I0224 00:08:58.694143 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:58 crc kubenswrapper[4824]: I0224 00:08:58.694195 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:58 crc kubenswrapper[4824]: E0224 00:08:58.694302 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:08:58 crc kubenswrapper[4824]: I0224 00:08:58.694144 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:08:58 crc kubenswrapper[4824]: E0224 00:08:58.694492 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:08:58 crc kubenswrapper[4824]: E0224 00:08:58.694592 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:08:58 crc kubenswrapper[4824]: I0224 00:08:58.704672 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qcp8r" podStartSLOduration=128.704645773 podStartE2EDuration="2m8.704645773s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:44.642211327 +0000 UTC m=+188.631835796" watchObservedRunningTime="2026-02-24 00:08:58.704645773 +0000 UTC m=+202.694270252" Feb 24 00:08:59 crc kubenswrapper[4824]: I0224 00:08:59.687431 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wvqfl_15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac/kube-multus/1.log" Feb 24 00:08:59 crc kubenswrapper[4824]: I0224 00:08:59.693714 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:59 crc kubenswrapper[4824]: E0224 00:08:59.694097 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:09:00 crc kubenswrapper[4824]: I0224 00:09:00.693398 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:09:00 crc kubenswrapper[4824]: I0224 00:09:00.693456 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:09:00 crc kubenswrapper[4824]: I0224 00:09:00.693583 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:09:00 crc kubenswrapper[4824]: E0224 00:09:00.693588 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:09:00 crc kubenswrapper[4824]: E0224 00:09:00.693741 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:09:00 crc kubenswrapper[4824]: E0224 00:09:00.693842 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:09:01 crc kubenswrapper[4824]: I0224 00:09:01.692818 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:09:01 crc kubenswrapper[4824]: E0224 00:09:01.693007 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:09:01 crc kubenswrapper[4824]: E0224 00:09:01.794195 4824 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 00:09:02 crc kubenswrapper[4824]: I0224 00:09:02.693647 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:09:02 crc kubenswrapper[4824]: I0224 00:09:02.693805 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:09:02 crc kubenswrapper[4824]: E0224 00:09:02.693807 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:09:02 crc kubenswrapper[4824]: E0224 00:09:02.694014 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:09:02 crc kubenswrapper[4824]: I0224 00:09:02.694297 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:09:02 crc kubenswrapper[4824]: E0224 00:09:02.694550 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:09:03 crc kubenswrapper[4824]: I0224 00:09:03.693592 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:09:03 crc kubenswrapper[4824]: E0224 00:09:03.694443 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:09:03 crc kubenswrapper[4824]: I0224 00:09:03.695145 4824 scope.go:117] "RemoveContainer" containerID="5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac" Feb 24 00:09:04 crc kubenswrapper[4824]: I0224 00:09:04.693367 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:09:04 crc kubenswrapper[4824]: I0224 00:09:04.693433 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:09:04 crc kubenswrapper[4824]: I0224 00:09:04.693433 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:09:04 crc kubenswrapper[4824]: E0224 00:09:04.693644 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:09:04 crc kubenswrapper[4824]: E0224 00:09:04.693743 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:09:04 crc kubenswrapper[4824]: E0224 00:09:04.693831 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:09:04 crc kubenswrapper[4824]: I0224 00:09:04.706113 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4xjg6_d985b875-dd5e-4767-a4e2-209894575a8f/ovnkube-controller/3.log" Feb 24 00:09:04 crc kubenswrapper[4824]: I0224 00:09:04.709534 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerStarted","Data":"a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee"} Feb 24 00:09:04 crc kubenswrapper[4824]: I0224 00:09:04.710090 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:09:04 crc kubenswrapper[4824]: I0224 00:09:04.746952 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" podStartSLOduration=134.746926636 podStartE2EDuration="2m14.746926636s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:04.746544376 +0000 UTC m=+208.736168865" watchObservedRunningTime="2026-02-24 00:09:04.746926636 +0000 UTC m=+208.736551125" Feb 24 00:09:04 crc kubenswrapper[4824]: I0224 00:09:04.805814 4824 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-multus/network-metrics-daemon-98z42"] Feb 24 00:09:04 crc kubenswrapper[4824]: I0224 00:09:04.806292 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:09:04 crc kubenswrapper[4824]: E0224 00:09:04.806399 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:09:05 crc kubenswrapper[4824]: I0224 00:09:05.693597 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:09:05 crc kubenswrapper[4824]: E0224 00:09:05.693967 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:09:06 crc kubenswrapper[4824]: I0224 00:09:06.693247 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:09:06 crc kubenswrapper[4824]: I0224 00:09:06.693304 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:09:06 crc kubenswrapper[4824]: I0224 00:09:06.693304 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:09:06 crc kubenswrapper[4824]: E0224 00:09:06.696069 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:09:06 crc kubenswrapper[4824]: E0224 00:09:06.696295 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:09:06 crc kubenswrapper[4824]: E0224 00:09:06.696408 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:09:06 crc kubenswrapper[4824]: E0224 00:09:06.795279 4824 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 00:09:07 crc kubenswrapper[4824]: I0224 00:09:07.693130 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:09:07 crc kubenswrapper[4824]: E0224 00:09:07.693383 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:09:08 crc kubenswrapper[4824]: I0224 00:09:08.693582 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:09:08 crc kubenswrapper[4824]: I0224 00:09:08.693680 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:09:08 crc kubenswrapper[4824]: I0224 00:09:08.693732 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:09:08 crc kubenswrapper[4824]: E0224 00:09:08.694330 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:09:08 crc kubenswrapper[4824]: E0224 00:09:08.694757 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:09:08 crc kubenswrapper[4824]: E0224 00:09:08.694680 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:09:08 crc kubenswrapper[4824]: I0224 00:09:08.731081 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:08 crc kubenswrapper[4824]: I0224 00:09:08.731282 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:09:08 crc kubenswrapper[4824]: I0224 00:09:08.731315 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:09:08 crc kubenswrapper[4824]: E0224 00:09:08.731377 4824 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:11:10.731332748 +0000 UTC m=+334.720957297 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:08 crc kubenswrapper[4824]: E0224 00:09:08.731444 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:09:08 crc kubenswrapper[4824]: E0224 00:09:08.731461 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:09:08 crc kubenswrapper[4824]: E0224 00:09:08.731473 4824 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:09:08 crc kubenswrapper[4824]: I0224 00:09:08.731487 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:09:08 crc kubenswrapper[4824]: E0224 00:09:08.731551 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 00:11:10.731512322 +0000 UTC m=+334.721136981 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:09:08 crc kubenswrapper[4824]: I0224 00:09:08.731595 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:09:08 crc kubenswrapper[4824]: E0224 00:09:08.731617 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:09:08 crc kubenswrapper[4824]: E0224 00:09:08.731835 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:09:08 crc kubenswrapper[4824]: E0224 00:09:08.731854 4824 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: 
[object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:09:08 crc kubenswrapper[4824]: E0224 00:09:08.731910 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 00:11:10.731897432 +0000 UTC m=+334.721522101 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:09:08 crc kubenswrapper[4824]: E0224 00:09:08.731710 4824 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:09:08 crc kubenswrapper[4824]: E0224 00:09:08.732140 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:11:10.732132999 +0000 UTC m=+334.721757468 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:09:08 crc kubenswrapper[4824]: E0224 00:09:08.731733 4824 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:09:08 crc kubenswrapper[4824]: E0224 00:09:08.732244 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:11:10.732237431 +0000 UTC m=+334.721861900 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:09:09 crc kubenswrapper[4824]: I0224 00:09:09.692864 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:09:09 crc kubenswrapper[4824]: E0224 00:09:09.693032 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:09:10 crc kubenswrapper[4824]: I0224 00:09:10.693030 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:09:10 crc kubenswrapper[4824]: I0224 00:09:10.693149 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:09:10 crc kubenswrapper[4824]: E0224 00:09:10.693280 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:09:10 crc kubenswrapper[4824]: E0224 00:09:10.693421 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:09:10 crc kubenswrapper[4824]: I0224 00:09:10.693753 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:09:10 crc kubenswrapper[4824]: E0224 00:09:10.693941 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:09:11 crc kubenswrapper[4824]: I0224 00:09:11.693186 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:09:11 crc kubenswrapper[4824]: E0224 00:09:11.693421 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:09:11 crc kubenswrapper[4824]: I0224 00:09:11.693907 4824 scope.go:117] "RemoveContainer" containerID="a39d567bc7ae642dd15ce2ea8650e1aa0fd3688e3c3c061ad145edf2bf963c06" Feb 24 00:09:11 crc kubenswrapper[4824]: E0224 00:09:11.796544 4824 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 00:09:12 crc kubenswrapper[4824]: I0224 00:09:12.693243 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:09:12 crc kubenswrapper[4824]: I0224 00:09:12.693347 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:09:12 crc kubenswrapper[4824]: E0224 00:09:12.693414 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:09:12 crc kubenswrapper[4824]: E0224 00:09:12.693602 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:09:12 crc kubenswrapper[4824]: I0224 00:09:12.693786 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:09:12 crc kubenswrapper[4824]: E0224 00:09:12.693874 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:09:12 crc kubenswrapper[4824]: I0224 00:09:12.738846 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wvqfl_15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac/kube-multus/1.log" Feb 24 00:09:12 crc kubenswrapper[4824]: I0224 00:09:12.738916 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wvqfl" event={"ID":"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac","Type":"ContainerStarted","Data":"e2df584c430cf17f7bb0674c0cc149453f39f49408337d9789565a34a1bfcb68"} Feb 24 00:09:13 crc kubenswrapper[4824]: I0224 00:09:13.692889 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:09:13 crc kubenswrapper[4824]: E0224 00:09:13.693404 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:09:14 crc kubenswrapper[4824]: I0224 00:09:14.693306 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:09:14 crc kubenswrapper[4824]: I0224 00:09:14.693390 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:09:14 crc kubenswrapper[4824]: I0224 00:09:14.693488 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:09:14 crc kubenswrapper[4824]: E0224 00:09:14.693606 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:09:14 crc kubenswrapper[4824]: E0224 00:09:14.693794 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:09:14 crc kubenswrapper[4824]: E0224 00:09:14.693995 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:09:15 crc kubenswrapper[4824]: I0224 00:09:15.693072 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:09:15 crc kubenswrapper[4824]: E0224 00:09:15.693269 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:09:16 crc kubenswrapper[4824]: I0224 00:09:16.693577 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:09:16 crc kubenswrapper[4824]: I0224 00:09:16.693649 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:09:16 crc kubenswrapper[4824]: E0224 00:09:16.695987 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:09:16 crc kubenswrapper[4824]: I0224 00:09:16.696050 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:09:16 crc kubenswrapper[4824]: E0224 00:09:16.696142 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:09:16 crc kubenswrapper[4824]: E0224 00:09:16.696237 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-98z42" podUID="a648113f-3e46-4170-ba30-7155fefbb413" Feb 24 00:09:17 crc kubenswrapper[4824]: I0224 00:09:17.693362 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:09:17 crc kubenswrapper[4824]: I0224 00:09:17.696696 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 24 00:09:17 crc kubenswrapper[4824]: I0224 00:09:17.696868 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 24 00:09:18 crc kubenswrapper[4824]: I0224 00:09:18.693092 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:09:18 crc kubenswrapper[4824]: I0224 00:09:18.693180 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:09:18 crc kubenswrapper[4824]: I0224 00:09:18.693183 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:09:18 crc kubenswrapper[4824]: I0224 00:09:18.697066 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 24 00:09:18 crc kubenswrapper[4824]: I0224 00:09:18.697186 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 24 00:09:18 crc kubenswrapper[4824]: I0224 00:09:18.697731 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 24 00:09:18 crc kubenswrapper[4824]: I0224 00:09:18.698232 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 24 00:09:23 crc kubenswrapper[4824]: I0224 00:09:23.276933 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:09:23 crc kubenswrapper[4824]: I0224 00:09:23.277035 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:09:23 crc kubenswrapper[4824]: I0224 00:09:23.711842 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.030024 4824 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.080254 4824 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jm7qk"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.080824 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.083215 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.083404 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.084117 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kh6hg"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.084849 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-kh6hg" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.084852 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7jnm"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.085581 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7jnm" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.086161 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.086634 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.087099 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.087202 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7vdck"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.087572 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7vdck" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.087750 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.088156 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.088252 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.093744 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.098142 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.098188 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.098239 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.098307 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.098360 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.100246 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.100302 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.100315 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.100398 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.100415 4824 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.100491 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.100887 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.101000 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.101036 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.101136 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.101244 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.101279 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.101361 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.101383 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.101495 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.101616 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.101700 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.101739 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.101818 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.102097 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.102303 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.102586 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.102814 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.102955 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.103239 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.103382 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.104026 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.104752 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g"]
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.105421 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.110320 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.110590 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.113844 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.113980 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.114229 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.115256 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pvqdd"]
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.115424 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.115625 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.115818 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.118110 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-r4c4b"]
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.118545 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.118663 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-r4c4b"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.118938 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.119062 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.119107 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.128832 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.129206 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.130940 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.131154 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.131212 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.131155 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.131371 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-h7djl"]
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.132070 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.133793 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-h7djl"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.134367 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwm9v"]
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.139573 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.139604 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwm9v"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.139986 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.142024 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xfl22"]
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.148334 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-vgr74"]
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.148572 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.148712 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjdhm"]
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.148903 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.149070 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.149269 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.149414 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.149579 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.149595 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjdhm"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.149654 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xfl22"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.149711 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vgr74"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.149866 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.154790 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jf5jw"]
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.155280 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-p5tqf"]
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.155685 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-p5tqf"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.155822 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.157904 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.158247 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.158359 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.158467 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.158611 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.158743 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.158806 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.159021 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.159195 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.159311 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.159416 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.159539 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.159639 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.159728 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.159821 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.160090 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.160252 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.162583 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.162719 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.162763 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.163942 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.167652 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29531520-969xh"]
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.168089 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2k2f"]
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.168335 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hl2rv"]
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.168813 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hl2rv"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.169131 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.169601 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2k2f"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.170083 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.170109 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29531520-969xh"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.170256 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.172622 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rvv9j"]
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.173320 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8dlbn"]
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.173805 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8dlbn"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.174123 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rvv9j"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.176858 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.178105 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.178234 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.178607 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.179453 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.179632 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.179858 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.180302 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-fp4wq"]
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.180582 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.180695 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.180771 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.181263 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.181372 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-fp4wq"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.181758 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ksmk7"]
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.182390 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ksmk7"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.183645 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.189836 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.190379 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q8hvw"]
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.191073 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q8hvw"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.193482 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cxlfh"]
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.195858 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-cxlfh"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.214186 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ccm27"]
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.215281 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ccm27"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.228632 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.231094 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.237672 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srljx\" (UniqueName: \"kubernetes.io/projected/390f4e92-8639-45bb-b91c-a55773bfa293-kube-api-access-srljx\") pod \"openshift-apiserver-operator-796bbdcf4f-m7jnm\" (UID: \"390f4e92-8639-45bb-b91c-a55773bfa293\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7jnm"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.237730 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-audit-dir\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.237751 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.237763 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/53344821-2f26-459a-9e42-003f3f1b5a87-images\") pod \"machine-api-operator-5694c8668f-kh6hg\" (UID: \"53344821-2f26-459a-9e42-003f3f1b5a87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh6hg"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.237854 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-client-ca\") pod \"controller-manager-879f6c89f-jm7qk\" (UID: \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.237880 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-serving-cert\") pod \"controller-manager-879f6c89f-jm7qk\" (UID: \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.237933 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-config\") pod \"route-controller-manager-6576b87f9c-9k27r\" (UID: \"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.237966 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110-etcd-ca\") pod \"etcd-operator-b45778765-pvqdd\" (UID: \"d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.237997 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9qkd\" (UniqueName: \"kubernetes.io/projected/01ed973e-7ed7-41ec-bea9-69d8c86e19ed-kube-api-access-n9qkd\") pod \"openshift-config-operator-7777fb866f-9ml5g\" (UID: \"01ed973e-7ed7-41ec-bea9-69d8c86e19ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.238148 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/53344821-2f26-459a-9e42-003f3f1b5a87-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kh6hg\" (UID: \"53344821-2f26-459a-9e42-003f3f1b5a87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh6hg"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.238193 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110-etcd-service-ca\") pod \"etcd-operator-b45778765-pvqdd\" (UID: \"d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.238316 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.238778 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-audit-policies\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.238851 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-etcd-client\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.238893 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/01ed973e-7ed7-41ec-bea9-69d8c86e19ed-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9ml5g\" (UID: \"01ed973e-7ed7-41ec-bea9-69d8c86e19ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.238933 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2595ad7b-d8cc-46d2-b0db-60e7ce636e9f-serving-cert\") pod \"authentication-operator-69f744f599-7vdck\" (UID: \"2595ad7b-d8cc-46d2-b0db-60e7ce636e9f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7vdck"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.238961 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ed973e-7ed7-41ec-bea9-69d8c86e19ed-serving-cert\") pod \"openshift-config-operator-7777fb866f-9ml5g\" (UID: \"01ed973e-7ed7-41ec-bea9-69d8c86e19ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.238983 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110-serving-cert\") pod \"etcd-operator-b45778765-pvqdd\" (UID: \"d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239015 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw5js\" (UniqueName: \"kubernetes.io/projected/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-kube-api-access-nw5js\") pod \"route-controller-manager-6576b87f9c-9k27r\" (UID: \"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239049 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-encryption-config\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239078 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-client-ca\") pod \"route-controller-manager-6576b87f9c-9k27r\" (UID: \"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239105 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqjj7\" (UniqueName: \"kubernetes.io/projected/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-kube-api-access-sqjj7\") pod \"controller-manager-879f6c89f-jm7qk\" (UID: \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239136 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2595ad7b-d8cc-46d2-b0db-60e7ce636e9f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7vdck\" (UID: \"2595ad7b-d8cc-46d2-b0db-60e7ce636e9f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7vdck"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239179 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110-config\") pod \"etcd-operator-b45778765-pvqdd\" (UID: \"d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239202 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctbsw\" (UniqueName: \"kubernetes.io/projected/53344821-2f26-459a-9e42-003f3f1b5a87-kube-api-access-ctbsw\") pod \"machine-api-operator-5694c8668f-kh6hg\" (UID: \"53344821-2f26-459a-9e42-003f3f1b5a87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh6hg"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239238 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-serving-cert\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239303 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53344821-2f26-459a-9e42-003f3f1b5a87-config\") pod \"machine-api-operator-5694c8668f-kh6hg\" (UID: \"53344821-2f26-459a-9e42-003f3f1b5a87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh6hg"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239351 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/390f4e92-8639-45bb-b91c-a55773bfa293-config\") pod \"openshift-apiserver-operator-796bbdcf4f-m7jnm\" (UID: \"390f4e92-8639-45bb-b91c-a55773bfa293\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7jnm"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239399 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239447 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmr92\" (UniqueName: \"kubernetes.io/projected/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-kube-api-access-hmr92\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239473 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110-etcd-client\") pod \"etcd-operator-b45778765-pvqdd\" (UID: \"d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239538 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239565 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2595ad7b-d8cc-46d2-b0db-60e7ce636e9f-service-ca-bundle\") pod \"authentication-operator-69f744f599-7vdck\" (UID: \"2595ad7b-d8cc-46d2-b0db-60e7ce636e9f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7vdck"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239591 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zw9s\" (UniqueName: \"kubernetes.io/projected/2595ad7b-d8cc-46d2-b0db-60e7ce636e9f-kube-api-access-6zw9s\") pod \"authentication-operator-69f744f599-7vdck\" (UID: \"2595ad7b-d8cc-46d2-b0db-60e7ce636e9f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7vdck"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239616 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-serving-cert\") pod \"route-controller-manager-6576b87f9c-9k27r\" (UID: \"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239637 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f4fs\" (UniqueName: \"kubernetes.io/projected/d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110-kube-api-access-4f4fs\") pod \"etcd-operator-b45778765-pvqdd\" (UID: \"d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239662 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/390f4e92-8639-45bb-b91c-a55773bfa293-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-m7jnm\" (UID: \"390f4e92-8639-45bb-b91c-a55773bfa293\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7jnm"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239727 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jm7qk\" (UID: \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk"
Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239761 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-config\") pod \"controller-manager-879f6c89f-jm7qk\" (UID: \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239788 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj72b\" (UniqueName: \"kubernetes.io/projected/581e69ae-c21a-4a9e-b1ea-9c38256d7b30-kube-api-access-tj72b\") pod \"downloads-7954f5f757-r4c4b\" (UID: \"581e69ae-c21a-4a9e-b1ea-9c38256d7b30\") " pod="openshift-console/downloads-7954f5f757-r4c4b" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.239831 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2595ad7b-d8cc-46d2-b0db-60e7ce636e9f-config\") pod \"authentication-operator-69f744f599-7vdck\" (UID: \"2595ad7b-d8cc-46d2-b0db-60e7ce636e9f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7vdck" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.241262 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c94hb"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.243413 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.244810 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w6pjq"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.245258 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs7nb"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.245630 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-99tkw"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.245786 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w6pjq" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.245786 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs7nb" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.246133 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9bck"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.246329 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.246489 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9bck" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.246535 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c94hb" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.246913 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5ltfz"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.249300 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8krrp"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.249511 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5ltfz" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.249845 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.249954 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8krrp" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.250342 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.250788 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ghwq4"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.251660 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ghwq4" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.252072 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6h296"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.253925 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2jqvq"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.254004 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6h296" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.254658 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531520-xxvzq"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.255050 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-xxvzq" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.255165 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2jqvq" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.255997 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-zlnwh"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.256563 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.257227 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kh6hg"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.258778 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4tvd9"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.259977 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4tvd9" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.260327 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.261868 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pvqdd"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.262998 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-r4c4b"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.264334 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwm9v"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.265561 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jm7qk"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.266930 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29531520-969xh"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.268224 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7jnm"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.269506 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.270801 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.272038 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c94hb"] Feb 
24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.273213 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cxlfh"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.274154 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2k2f"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.275198 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hl2rv"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.276101 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8dlbn"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.277249 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jf5jw"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.278073 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w6pjq"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.279984 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q8hvw"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.280975 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-p5tqf"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.283309 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.285502 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rvv9j"] Feb 24 00:09:24 crc 
kubenswrapper[4824]: I0224 00:09:24.290465 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.291599 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7vdck"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.294214 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs7nb"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.295596 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dq9gz"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.297353 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-vvlvv"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.297550 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.298044 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vvlvv" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.298302 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-h7djl"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.299497 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4tvd9"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.300987 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ksmk7"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.306277 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8krrp"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.308333 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xfl22"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.309021 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.312950 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zlnwh"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.314007 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjdhm"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.319362 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ccm27"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.321505 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2jqvq"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 
00:09:24.323126 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.324729 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531520-xxvzq"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.327464 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-99tkw"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.328974 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9bck"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.330345 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dq9gz"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.332076 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6h296"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.333318 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ghwq4"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.334349 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5ltfz"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.335439 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-5n768"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.336479 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5n768"] Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.336602 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-5n768" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.341144 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2595ad7b-d8cc-46d2-b0db-60e7ce636e9f-config\") pod \"authentication-operator-69f744f599-7vdck\" (UID: \"2595ad7b-d8cc-46d2-b0db-60e7ce636e9f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7vdck" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.341191 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srljx\" (UniqueName: \"kubernetes.io/projected/390f4e92-8639-45bb-b91c-a55773bfa293-kube-api-access-srljx\") pod \"openshift-apiserver-operator-796bbdcf4f-m7jnm\" (UID: \"390f4e92-8639-45bb-b91c-a55773bfa293\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7jnm" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.341219 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/53344821-2f26-459a-9e42-003f3f1b5a87-images\") pod \"machine-api-operator-5694c8668f-kh6hg\" (UID: \"53344821-2f26-459a-9e42-003f3f1b5a87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh6hg" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.341239 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-audit-dir\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.341257 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-client-ca\") pod 
\"controller-manager-879f6c89f-jm7qk\" (UID: \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.341274 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-serving-cert\") pod \"controller-manager-879f6c89f-jm7qk\" (UID: \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.341297 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2tmt\" (UniqueName: \"kubernetes.io/projected/b14c3eec-796c-48b0-b4fe-67cb327f2de7-kube-api-access-c2tmt\") pod \"machine-config-controller-84d6567774-ksmk7\" (UID: \"b14c3eec-796c-48b0-b4fe-67cb327f2de7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ksmk7" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.341326 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/13bff804-f118-473b-a547-433aed671b46-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-q8hvw\" (UID: \"13bff804-f118-473b-a547-433aed671b46\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q8hvw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.341352 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b14c3eec-796c-48b0-b4fe-67cb327f2de7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ksmk7\" (UID: \"b14c3eec-796c-48b0-b4fe-67cb327f2de7\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ksmk7" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.341359 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-audit-dir\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.341384 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6f8699c7-58f5-4a80-b5af-5403cb178676-audit-dir\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.341697 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.342546 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.342728 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-config\") pod \"route-controller-manager-6576b87f9c-9k27r\" (UID: \"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.342777 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.342810 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hl2rv\" (UID: \"33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hl2rv" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.342759 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/53344821-2f26-459a-9e42-003f3f1b5a87-images\") pod \"machine-api-operator-5694c8668f-kh6hg\" (UID: \"53344821-2f26-459a-9e42-003f3f1b5a87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh6hg" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.342840 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110-etcd-ca\") pod \"etcd-operator-b45778765-pvqdd\" (UID: \"d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd" Feb 24 
00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.342923 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9qkd\" (UniqueName: \"kubernetes.io/projected/01ed973e-7ed7-41ec-bea9-69d8c86e19ed-kube-api-access-n9qkd\") pod \"openshift-config-operator-7777fb866f-9ml5g\" (UID: \"01ed973e-7ed7-41ec-bea9-69d8c86e19ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.342967 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/53344821-2f26-459a-9e42-003f3f1b5a87-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kh6hg\" (UID: \"53344821-2f26-459a-9e42-003f3f1b5a87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh6hg" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343007 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4-images\") pod \"machine-config-operator-74547568cd-hl2rv\" (UID: \"33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hl2rv" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343036 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4-proxy-tls\") pod \"machine-config-operator-74547568cd-hl2rv\" (UID: \"33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hl2rv" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343062 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110-etcd-service-ca\") pod \"etcd-operator-b45778765-pvqdd\" (UID: \"d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343087 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343118 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343144 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4dl5\" (UniqueName: \"kubernetes.io/projected/6f8699c7-58f5-4a80-b5af-5403cb178676-kube-api-access-z4dl5\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343176 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-audit-policies\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" Feb 
24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343203 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/250422bb-6e8f-4622-a456-ded5825e7c86-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lwm9v\" (UID: \"250422bb-6e8f-4622-a456-ded5825e7c86\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwm9v" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343234 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-etcd-client\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343265 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343305 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/01ed973e-7ed7-41ec-bea9-69d8c86e19ed-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9ml5g\" (UID: \"01ed973e-7ed7-41ec-bea9-69d8c86e19ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343327 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/b14c3eec-796c-48b0-b4fe-67cb327f2de7-proxy-tls\") pod \"machine-config-controller-84d6567774-ksmk7\" (UID: \"b14c3eec-796c-48b0-b4fe-67cb327f2de7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ksmk7" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343354 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2595ad7b-d8cc-46d2-b0db-60e7ce636e9f-serving-cert\") pod \"authentication-operator-69f744f599-7vdck\" (UID: \"2595ad7b-d8cc-46d2-b0db-60e7ce636e9f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7vdck" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343376 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343399 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ed973e-7ed7-41ec-bea9-69d8c86e19ed-serving-cert\") pod \"openshift-config-operator-7777fb866f-9ml5g\" (UID: \"01ed973e-7ed7-41ec-bea9-69d8c86e19ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343427 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw5js\" (UniqueName: \"kubernetes.io/projected/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-kube-api-access-nw5js\") pod \"route-controller-manager-6576b87f9c-9k27r\" (UID: \"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343449 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110-serving-cert\") pod \"etcd-operator-b45778765-pvqdd\" (UID: \"d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343476 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-encryption-config\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343496 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-client-ca\") pod \"route-controller-manager-6576b87f9c-9k27r\" (UID: \"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343537 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/250422bb-6e8f-4622-a456-ded5825e7c86-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lwm9v\" (UID: \"250422bb-6e8f-4622-a456-ded5825e7c86\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwm9v" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343566 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqjj7\" (UniqueName: 
\"kubernetes.io/projected/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-kube-api-access-sqjj7\") pod \"controller-manager-879f6c89f-jm7qk\" (UID: \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343593 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2595ad7b-d8cc-46d2-b0db-60e7ce636e9f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7vdck\" (UID: \"2595ad7b-d8cc-46d2-b0db-60e7ce636e9f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7vdck" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343638 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85fxs\" (UniqueName: \"kubernetes.io/projected/13bff804-f118-473b-a547-433aed671b46-kube-api-access-85fxs\") pod \"control-plane-machine-set-operator-78cbb6b69f-q8hvw\" (UID: \"13bff804-f118-473b-a547-433aed671b46\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q8hvw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343656 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110-etcd-ca\") pod \"etcd-operator-b45778765-pvqdd\" (UID: \"d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343667 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110-config\") pod \"etcd-operator-b45778765-pvqdd\" (UID: \"d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 
00:09:24.343718 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110-etcd-service-ca\") pod \"etcd-operator-b45778765-pvqdd\" (UID: \"d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343729 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctbsw\" (UniqueName: \"kubernetes.io/projected/53344821-2f26-459a-9e42-003f3f1b5a87-kube-api-access-ctbsw\") pod \"machine-api-operator-5694c8668f-kh6hg\" (UID: \"53344821-2f26-459a-9e42-003f3f1b5a87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh6hg" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343795 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk6p9\" (UniqueName: \"kubernetes.io/projected/250422bb-6e8f-4622-a456-ded5825e7c86-kube-api-access-sk6p9\") pod \"openshift-controller-manager-operator-756b6f6bc6-lwm9v\" (UID: \"250422bb-6e8f-4622-a456-ded5825e7c86\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwm9v" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343833 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-serving-cert\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343852 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53344821-2f26-459a-9e42-003f3f1b5a87-config\") pod \"machine-api-operator-5694c8668f-kh6hg\" (UID: 
\"53344821-2f26-459a-9e42-003f3f1b5a87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh6hg" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343870 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/390f4e92-8639-45bb-b91c-a55773bfa293-config\") pod \"openshift-apiserver-operator-796bbdcf4f-m7jnm\" (UID: \"390f4e92-8639-45bb-b91c-a55773bfa293\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7jnm" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343893 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8118fe3c-1479-4634-9b64-9350991d909d-metrics-tls\") pod \"dns-operator-744455d44c-h7djl\" (UID: \"8118fe3c-1479-4634-9b64-9350991d909d\") " pod="openshift-dns-operator/dns-operator-744455d44c-h7djl" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343917 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343927 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-config\") pod \"route-controller-manager-6576b87f9c-9k27r\" (UID: \"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343940 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 24 00:09:24 crc 
kubenswrapper[4824]: I0224 00:09:24.343935 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmr92\" (UniqueName: \"kubernetes.io/projected/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-kube-api-access-hmr92\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.343988 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110-etcd-client\") pod \"etcd-operator-b45778765-pvqdd\" (UID: \"d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.344019 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55rl5\" (UniqueName: \"kubernetes.io/projected/8118fe3c-1479-4634-9b64-9350991d909d-kube-api-access-55rl5\") pod \"dns-operator-744455d44c-h7djl\" (UID: \"8118fe3c-1479-4634-9b64-9350991d909d\") " pod="openshift-dns-operator/dns-operator-744455d44c-h7djl" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.344047 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgctm\" (UniqueName: \"kubernetes.io/projected/33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4-kube-api-access-vgctm\") pod \"machine-config-operator-74547568cd-hl2rv\" (UID: \"33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hl2rv" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.344068 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-session\") pod 
\"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.344093 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.344117 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2595ad7b-d8cc-46d2-b0db-60e7ce636e9f-service-ca-bundle\") pod \"authentication-operator-69f744f599-7vdck\" (UID: \"2595ad7b-d8cc-46d2-b0db-60e7ce636e9f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7vdck" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.344142 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zw9s\" (UniqueName: \"kubernetes.io/projected/2595ad7b-d8cc-46d2-b0db-60e7ce636e9f-kube-api-access-6zw9s\") pod \"authentication-operator-69f744f599-7vdck\" (UID: \"2595ad7b-d8cc-46d2-b0db-60e7ce636e9f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7vdck" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.344170 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.344193 4824 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f4fs\" (UniqueName: \"kubernetes.io/projected/d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110-kube-api-access-4f4fs\") pod \"etcd-operator-b45778765-pvqdd\" (UID: \"d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.344214 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/390f4e92-8639-45bb-b91c-a55773bfa293-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-m7jnm\" (UID: \"390f4e92-8639-45bb-b91c-a55773bfa293\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7jnm" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.344232 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-serving-cert\") pod \"route-controller-manager-6576b87f9c-9k27r\" (UID: \"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.344270 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110-config\") pod \"etcd-operator-b45778765-pvqdd\" (UID: \"d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.344280 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.344341 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.344369 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jm7qk\" (UID: \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.344398 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-config\") pod \"controller-manager-879f6c89f-jm7qk\" (UID: \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.344426 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj72b\" (UniqueName: \"kubernetes.io/projected/581e69ae-c21a-4a9e-b1ea-9c38256d7b30-kube-api-access-tj72b\") pod \"downloads-7954f5f757-r4c4b\" (UID: \"581e69ae-c21a-4a9e-b1ea-9c38256d7b30\") " pod="openshift-console/downloads-7954f5f757-r4c4b" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.344458 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-audit-policies\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.344617 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53344821-2f26-459a-9e42-003f3f1b5a87-config\") pod \"machine-api-operator-5694c8668f-kh6hg\" (UID: \"53344821-2f26-459a-9e42-003f3f1b5a87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh6hg" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.344790 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-audit-policies\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.345199 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/390f4e92-8639-45bb-b91c-a55773bfa293-config\") pod \"openshift-apiserver-operator-796bbdcf4f-m7jnm\" (UID: \"390f4e92-8639-45bb-b91c-a55773bfa293\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7jnm" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.345274 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.346092 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.346355 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/01ed973e-7ed7-41ec-bea9-69d8c86e19ed-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9ml5g\" (UID: \"01ed973e-7ed7-41ec-bea9-69d8c86e19ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.346582 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2595ad7b-d8cc-46d2-b0db-60e7ce636e9f-service-ca-bundle\") pod \"authentication-operator-69f744f599-7vdck\" (UID: \"2595ad7b-d8cc-46d2-b0db-60e7ce636e9f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7vdck" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.347211 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jm7qk\" (UID: \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.347279 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-client-ca\") pod \"route-controller-manager-6576b87f9c-9k27r\" (UID: \"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 
00:09:24.347356 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2595ad7b-d8cc-46d2-b0db-60e7ce636e9f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-7vdck\" (UID: \"2595ad7b-d8cc-46d2-b0db-60e7ce636e9f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7vdck" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.348113 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-config\") pod \"controller-manager-879f6c89f-jm7qk\" (UID: \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.349091 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-serving-cert\") pod \"controller-manager-879f6c89f-jm7qk\" (UID: \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.349159 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-etcd-client\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.349266 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2595ad7b-d8cc-46d2-b0db-60e7ce636e9f-config\") pod \"authentication-operator-69f744f599-7vdck\" (UID: \"2595ad7b-d8cc-46d2-b0db-60e7ce636e9f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7vdck" Feb 24 
00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.349358 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ed973e-7ed7-41ec-bea9-69d8c86e19ed-serving-cert\") pod \"openshift-config-operator-7777fb866f-9ml5g\" (UID: \"01ed973e-7ed7-41ec-bea9-69d8c86e19ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.349471 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/53344821-2f26-459a-9e42-003f3f1b5a87-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-kh6hg\" (UID: \"53344821-2f26-459a-9e42-003f3f1b5a87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh6hg" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.350222 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/390f4e92-8639-45bb-b91c-a55773bfa293-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-m7jnm\" (UID: \"390f4e92-8639-45bb-b91c-a55773bfa293\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7jnm" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.350598 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2595ad7b-d8cc-46d2-b0db-60e7ce636e9f-serving-cert\") pod \"authentication-operator-69f744f599-7vdck\" (UID: \"2595ad7b-d8cc-46d2-b0db-60e7ce636e9f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7vdck" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.351652 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-serving-cert\") pod \"route-controller-manager-6576b87f9c-9k27r\" (UID: 
\"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.351799 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-encryption-config\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.352708 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-serving-cert\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.352712 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110-serving-cert\") pod \"etcd-operator-b45778765-pvqdd\" (UID: \"d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.353932 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110-etcd-client\") pod \"etcd-operator-b45778765-pvqdd\" (UID: \"d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.363615 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.365259 4824 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-client-ca\") pod \"controller-manager-879f6c89f-jm7qk\" (UID: \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.383765 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.403875 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.423833 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.443430 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445190 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/250422bb-6e8f-4622-a456-ded5825e7c86-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lwm9v\" (UID: \"250422bb-6e8f-4622-a456-ded5825e7c86\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwm9v" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445256 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85fxs\" (UniqueName: \"kubernetes.io/projected/13bff804-f118-473b-a547-433aed671b46-kube-api-access-85fxs\") pod \"control-plane-machine-set-operator-78cbb6b69f-q8hvw\" (UID: \"13bff804-f118-473b-a547-433aed671b46\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q8hvw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 
00:09:24.445284 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk6p9\" (UniqueName: \"kubernetes.io/projected/250422bb-6e8f-4622-a456-ded5825e7c86-kube-api-access-sk6p9\") pod \"openshift-controller-manager-operator-756b6f6bc6-lwm9v\" (UID: \"250422bb-6e8f-4622-a456-ded5825e7c86\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwm9v" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445313 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8118fe3c-1479-4634-9b64-9350991d909d-metrics-tls\") pod \"dns-operator-744455d44c-h7djl\" (UID: \"8118fe3c-1479-4634-9b64-9350991d909d\") " pod="openshift-dns-operator/dns-operator-744455d44c-h7djl" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445338 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55rl5\" (UniqueName: \"kubernetes.io/projected/8118fe3c-1479-4634-9b64-9350991d909d-kube-api-access-55rl5\") pod \"dns-operator-744455d44c-h7djl\" (UID: \"8118fe3c-1479-4634-9b64-9350991d909d\") " pod="openshift-dns-operator/dns-operator-744455d44c-h7djl" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445356 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445377 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgctm\" (UniqueName: \"kubernetes.io/projected/33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4-kube-api-access-vgctm\") pod \"machine-config-operator-74547568cd-hl2rv\" 
(UID: \"33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hl2rv" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445404 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445444 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445463 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445490 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-audit-policies\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445554 4824 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-c2tmt\" (UniqueName: \"kubernetes.io/projected/b14c3eec-796c-48b0-b4fe-67cb327f2de7-kube-api-access-c2tmt\") pod \"machine-config-controller-84d6567774-ksmk7\" (UID: \"b14c3eec-796c-48b0-b4fe-67cb327f2de7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ksmk7" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445575 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/13bff804-f118-473b-a547-433aed671b46-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-q8hvw\" (UID: \"13bff804-f118-473b-a547-433aed671b46\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q8hvw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445597 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b14c3eec-796c-48b0-b4fe-67cb327f2de7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ksmk7\" (UID: \"b14c3eec-796c-48b0-b4fe-67cb327f2de7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ksmk7" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445623 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6f8699c7-58f5-4a80-b5af-5403cb178676-audit-dir\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445642 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445662 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445684 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445704 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hl2rv\" (UID: \"33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hl2rv" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445731 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4-images\") pod \"machine-config-operator-74547568cd-hl2rv\" (UID: \"33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hl2rv" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 
00:09:24.445749 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4-proxy-tls\") pod \"machine-config-operator-74547568cd-hl2rv\" (UID: \"33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hl2rv" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445770 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445789 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445809 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4dl5\" (UniqueName: \"kubernetes.io/projected/6f8699c7-58f5-4a80-b5af-5403cb178676-kube-api-access-z4dl5\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445828 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/250422bb-6e8f-4622-a456-ded5825e7c86-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lwm9v\" 
(UID: \"250422bb-6e8f-4622-a456-ded5825e7c86\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwm9v" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445868 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445886 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b14c3eec-796c-48b0-b4fe-67cb327f2de7-proxy-tls\") pod \"machine-config-controller-84d6567774-ksmk7\" (UID: \"b14c3eec-796c-48b0-b4fe-67cb327f2de7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ksmk7" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.445906 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.446199 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6f8699c7-58f5-4a80-b5af-5403cb178676-audit-dir\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.447296 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-audit-policies\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.447555 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.448206 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4-images\") pod \"machine-config-operator-74547568cd-hl2rv\" (UID: \"33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hl2rv" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.448338 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b14c3eec-796c-48b0-b4fe-67cb327f2de7-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-ksmk7\" (UID: \"b14c3eec-796c-48b0-b4fe-67cb327f2de7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ksmk7" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.448789 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.449611 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.450061 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hl2rv\" (UID: \"33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hl2rv" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.450143 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.450243 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8118fe3c-1479-4634-9b64-9350991d909d-metrics-tls\") pod \"dns-operator-744455d44c-h7djl\" (UID: \"8118fe3c-1479-4634-9b64-9350991d909d\") " pod="openshift-dns-operator/dns-operator-744455d44c-h7djl" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.450318 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/250422bb-6e8f-4622-a456-ded5825e7c86-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-lwm9v\" (UID: \"250422bb-6e8f-4622-a456-ded5825e7c86\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwm9v" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.450508 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.450726 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/250422bb-6e8f-4622-a456-ded5825e7c86-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lwm9v\" (UID: \"250422bb-6e8f-4622-a456-ded5825e7c86\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwm9v" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.451761 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.452507 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 
00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.452740 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.453146 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.454146 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4-proxy-tls\") pod \"machine-config-operator-74547568cd-hl2rv\" (UID: \"33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hl2rv" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.454309 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.455881 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-template-login\") pod 
\"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.463505 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.483537 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.503925 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.523275 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.543789 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.563213 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.583435 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.604018 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.624600 4824 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.644202 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.664105 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.686148 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.704171 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.724070 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.743774 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.762407 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.782801 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.791647 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b14c3eec-796c-48b0-b4fe-67cb327f2de7-proxy-tls\") pod \"machine-config-controller-84d6567774-ksmk7\" (UID: \"b14c3eec-796c-48b0-b4fe-67cb327f2de7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ksmk7" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.803998 4824 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.824075 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.830688 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/13bff804-f118-473b-a547-433aed671b46-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-q8hvw\" (UID: \"13bff804-f118-473b-a547-433aed671b46\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q8hvw" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.844021 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.887477 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.889975 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.903928 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.923748 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.943707 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 24 00:09:24 crc kubenswrapper[4824]: I0224 00:09:24.963180 4824 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.003346 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.024714 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.043559 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.064513 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.092721 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.103204 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.123433 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.142779 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.183446 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.183725 4824 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.203844 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.223363 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.243966 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.261942 4824 request.go:700] Waited for 1.01329131s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&limit=500&resourceVersion=0 Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.263827 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.284205 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.305091 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.323120 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.343262 4824 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.364103 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.384652 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.403873 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.424318 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.443646 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.464279 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.484621 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.504240 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.533507 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.543962 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" 
Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.564055 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.583853 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.604577 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.623585 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.643940 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.663633 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.684484 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.705199 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.724127 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.743193 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.763270 4824 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.784724 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.808771 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.825205 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.844503 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.873262 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.883575 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.912380 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.922945 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.943119 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.963623 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 24 00:09:25 crc kubenswrapper[4824]: I0224 00:09:25.983450 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.023479 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.044066 4824 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.064217 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.085869 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.103979 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.124104 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.158721 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.163365 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.183762 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.234726 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srljx\" (UniqueName: \"kubernetes.io/projected/390f4e92-8639-45bb-b91c-a55773bfa293-kube-api-access-srljx\") pod \"openshift-apiserver-operator-796bbdcf4f-m7jnm\" (UID: \"390f4e92-8639-45bb-b91c-a55773bfa293\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7jnm"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.246344 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7jnm"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.254282 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9qkd\" (UniqueName: \"kubernetes.io/projected/01ed973e-7ed7-41ec-bea9-69d8c86e19ed-kube-api-access-n9qkd\") pod \"openshift-config-operator-7777fb866f-9ml5g\" (UID: \"01ed973e-7ed7-41ec-bea9-69d8c86e19ed\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.262349 4824 request.go:700] Waited for 1.918316996s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa/token
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.281338 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctbsw\" (UniqueName: \"kubernetes.io/projected/53344821-2f26-459a-9e42-003f3f1b5a87-kube-api-access-ctbsw\") pod \"machine-api-operator-5694c8668f-kh6hg\" (UID: \"53344821-2f26-459a-9e42-003f3f1b5a87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-kh6hg"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.284807 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmr92\" (UniqueName: \"kubernetes.io/projected/836fad19-b7d1-434c-9fd8-faf3eb1d80d1-kube-api-access-hmr92\") pod \"apiserver-7bbb656c7d-jkghx\" (UID: \"836fad19-b7d1-434c-9fd8-faf3eb1d80d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.305142 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zw9s\" (UniqueName: \"kubernetes.io/projected/2595ad7b-d8cc-46d2-b0db-60e7ce636e9f-kube-api-access-6zw9s\") pod \"authentication-operator-69f744f599-7vdck\" (UID: \"2595ad7b-d8cc-46d2-b0db-60e7ce636e9f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-7vdck"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.330177 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw5js\" (UniqueName: \"kubernetes.io/projected/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-kube-api-access-nw5js\") pod \"route-controller-manager-6576b87f9c-9k27r\" (UID: \"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.340545 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f4fs\" (UniqueName: \"kubernetes.io/projected/d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110-kube-api-access-4f4fs\") pod \"etcd-operator-b45778765-pvqdd\" (UID: \"d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.346200 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-7vdck"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.356751 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.368205 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj72b\" (UniqueName: \"kubernetes.io/projected/581e69ae-c21a-4a9e-b1ea-9c38256d7b30-kube-api-access-tj72b\") pod \"downloads-7954f5f757-r4c4b\" (UID: \"581e69ae-c21a-4a9e-b1ea-9c38256d7b30\") " pod="openshift-console/downloads-7954f5f757-r4c4b"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.368536 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.370809 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.381920 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-r4c4b"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.386962 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqjj7\" (UniqueName: \"kubernetes.io/projected/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-kube-api-access-sqjj7\") pod \"controller-manager-879f6c89f-jm7qk\" (UID: \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.403183 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85fxs\" (UniqueName: \"kubernetes.io/projected/13bff804-f118-473b-a547-433aed671b46-kube-api-access-85fxs\") pod \"control-plane-machine-set-operator-78cbb6b69f-q8hvw\" (UID: \"13bff804-f118-473b-a547-433aed671b46\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q8hvw"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.433580 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55rl5\" (UniqueName: \"kubernetes.io/projected/8118fe3c-1479-4634-9b64-9350991d909d-kube-api-access-55rl5\") pod \"dns-operator-744455d44c-h7djl\" (UID: \"8118fe3c-1479-4634-9b64-9350991d909d\") " pod="openshift-dns-operator/dns-operator-744455d44c-h7djl"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.442344 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk6p9\" (UniqueName: \"kubernetes.io/projected/250422bb-6e8f-4622-a456-ded5825e7c86-kube-api-access-sk6p9\") pod \"openshift-controller-manager-operator-756b6f6bc6-lwm9v\" (UID: \"250422bb-6e8f-4622-a456-ded5825e7c86\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwm9v"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.462855 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2tmt\" (UniqueName: \"kubernetes.io/projected/b14c3eec-796c-48b0-b4fe-67cb327f2de7-kube-api-access-c2tmt\") pod \"machine-config-controller-84d6567774-ksmk7\" (UID: \"b14c3eec-796c-48b0-b4fe-67cb327f2de7\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ksmk7"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.482658 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4dl5\" (UniqueName: \"kubernetes.io/projected/6f8699c7-58f5-4a80-b5af-5403cb178676-kube-api-access-z4dl5\") pod \"oauth-openshift-558db77b4-jf5jw\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.503676 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.513811 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.514297 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgctm\" (UniqueName: \"kubernetes.io/projected/33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4-kube-api-access-vgctm\") pod \"machine-config-operator-74547568cd-hl2rv\" (UID: \"33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hl2rv"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.525940 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hl2rv"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.530443 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-kh6hg"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.579987 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.595042 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7jnm"]
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.605694 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f66ddecd-538b-48bd-a335-e7f99181daa0-node-pullsecrets\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.605744 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4349e271-7758-4dcf-9053-fbc984436a8b-auth-proxy-config\") pod \"machine-approver-56656f9798-vgr74\" (UID: \"4349e271-7758-4dcf-9053-fbc984436a8b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vgr74"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.605761 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4349e271-7758-4dcf-9053-fbc984436a8b-machine-approver-tls\") pod \"machine-approver-56656f9798-vgr74\" (UID: \"4349e271-7758-4dcf-9053-fbc984436a8b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vgr74"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.605790 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dxf6\" (UniqueName: \"kubernetes.io/projected/5b0ff99f-1e04-4e23-895a-a02a303c8daa-kube-api-access-5dxf6\") pod \"router-default-5444994796-fp4wq\" (UID: \"5b0ff99f-1e04-4e23-895a-a02a303c8daa\") " pod="openshift-ingress/router-default-5444994796-fp4wq"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.605840 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d3de326-8359-4a7c-84da-57a071a929d7-config\") pod \"kube-apiserver-operator-766d6c64bb-b2k2f\" (UID: \"5d3de326-8359-4a7c-84da-57a071a929d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2k2f"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.605868 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ac86b042-947d-402f-a7c8-bb0a69d3f86e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cxlfh\" (UID: \"ac86b042-947d-402f-a7c8-bb0a69d3f86e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cxlfh"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.605915 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4a8a6d0-c052-4bca-9be8-dddd6d2ef017-srv-cert\") pod \"catalog-operator-68c6474976-rvv9j\" (UID: \"a4a8a6d0-c052-4bca-9be8-dddd6d2ef017\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rvv9j"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.605929 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a4a8a6d0-c052-4bca-9be8-dddd6d2ef017-profile-collector-cert\") pod \"catalog-operator-68c6474976-rvv9j\" (UID: \"a4a8a6d0-c052-4bca-9be8-dddd6d2ef017\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rvv9j"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.605945 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb795a58-0029-416c-84fa-ae83cf338858-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8dlbn\" (UID: \"bb795a58-0029-416c-84fa-ae83cf338858\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8dlbn"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.605976 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606005 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9016587d-3cd5-46d7-bd50-586cd32933f7-registry-certificates\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606029 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb795a58-0029-416c-84fa-ae83cf338858-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8dlbn\" (UID: \"bb795a58-0029-416c-84fa-ae83cf338858\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8dlbn"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606045 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7dh8\" (UniqueName: \"kubernetes.io/projected/2f39cab4-77fc-4641-9e84-c01b0dedc300-kube-api-access-r7dh8\") pod \"console-operator-58897d9998-p5tqf\" (UID: \"2f39cab4-77fc-4641-9e84-c01b0dedc300\") " pod="openshift-console-operator/console-operator-58897d9998-p5tqf"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606062 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f66ddecd-538b-48bd-a335-e7f99181daa0-encryption-config\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606109 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9016587d-3cd5-46d7-bd50-586cd32933f7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606127 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9016587d-3cd5-46d7-bd50-586cd32933f7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606142 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f66ddecd-538b-48bd-a335-e7f99181daa0-serving-cert\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606161 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdkzl\" (UniqueName: \"kubernetes.io/projected/9016587d-3cd5-46d7-bd50-586cd32933f7-kube-api-access-cdkzl\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606191 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f09bc4be-bc94-4c63-93ec-4bc2fef07d1b-serviceca\") pod \"image-pruner-29531520-969xh\" (UID: \"f09bc4be-bc94-4c63-93ec-4bc2fef07d1b\") " pod="openshift-image-registry/image-pruner-29531520-969xh"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606208 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5b0ff99f-1e04-4e23-895a-a02a303c8daa-stats-auth\") pod \"router-default-5444994796-fp4wq\" (UID: \"5b0ff99f-1e04-4e23-895a-a02a303c8daa\") " pod="openshift-ingress/router-default-5444994796-fp4wq"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606224 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b0ff99f-1e04-4e23-895a-a02a303c8daa-metrics-certs\") pod \"router-default-5444994796-fp4wq\" (UID: \"5b0ff99f-1e04-4e23-895a-a02a303c8daa\") " pod="openshift-ingress/router-default-5444994796-fp4wq"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606252 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w68sp\" (UniqueName: \"kubernetes.io/projected/f66ddecd-538b-48bd-a335-e7f99181daa0-kube-api-access-w68sp\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606266 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d3de326-8359-4a7c-84da-57a071a929d7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-b2k2f\" (UID: \"5d3de326-8359-4a7c-84da-57a071a929d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2k2f"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606288 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp68j\" (UniqueName: \"kubernetes.io/projected/44376c4d-d433-41a2-bdc1-22a9792e7640-kube-api-access-lp68j\") pod \"cluster-samples-operator-665b6dd947-tjdhm\" (UID: \"44376c4d-d433-41a2-bdc1-22a9792e7640\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjdhm"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606316 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f39cab4-77fc-4641-9e84-c01b0dedc300-trusted-ca\") pod \"console-operator-58897d9998-p5tqf\" (UID: \"2f39cab4-77fc-4641-9e84-c01b0dedc300\") " pod="openshift-console-operator/console-operator-58897d9998-p5tqf"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606333 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d3de326-8359-4a7c-84da-57a071a929d7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-b2k2f\" (UID: \"5d3de326-8359-4a7c-84da-57a071a929d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2k2f"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606360 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb795a58-0029-416c-84fa-ae83cf338858-config\") pod \"kube-controller-manager-operator-78b949d7b-8dlbn\" (UID: \"bb795a58-0029-416c-84fa-ae83cf338858\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8dlbn"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606390 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9016587d-3cd5-46d7-bd50-586cd32933f7-registry-tls\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606408 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9016587d-3cd5-46d7-bd50-586cd32933f7-trusted-ca\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606445 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb7ql\" (UniqueName: \"kubernetes.io/projected/ac86b042-947d-402f-a7c8-bb0a69d3f86e-kube-api-access-fb7ql\") pod \"multus-admission-controller-857f4d67dd-cxlfh\" (UID: \"ac86b042-947d-402f-a7c8-bb0a69d3f86e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cxlfh"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606478 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f66ddecd-538b-48bd-a335-e7f99181daa0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606632 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f66ddecd-538b-48bd-a335-e7f99181daa0-audit\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606658 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f66ddecd-538b-48bd-a335-e7f99181daa0-etcd-serving-ca\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606678 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f39cab4-77fc-4641-9e84-c01b0dedc300-config\") pod \"console-operator-58897d9998-p5tqf\" (UID: \"2f39cab4-77fc-4641-9e84-c01b0dedc300\") " pod="openshift-console-operator/console-operator-58897d9998-p5tqf"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606698 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc987\" (UniqueName: \"kubernetes.io/projected/f09bc4be-bc94-4c63-93ec-4bc2fef07d1b-kube-api-access-qc987\") pod \"image-pruner-29531520-969xh\" (UID: \"f09bc4be-bc94-4c63-93ec-4bc2fef07d1b\") " pod="openshift-image-registry/image-pruner-29531520-969xh"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606716 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b0ff99f-1e04-4e23-895a-a02a303c8daa-service-ca-bundle\") pod \"router-default-5444994796-fp4wq\" (UID: \"5b0ff99f-1e04-4e23-895a-a02a303c8daa\") " pod="openshift-ingress/router-default-5444994796-fp4wq"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606734 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f66ddecd-538b-48bd-a335-e7f99181daa0-etcd-client\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606751 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f66ddecd-538b-48bd-a335-e7f99181daa0-audit-dir\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606772 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f66ddecd-538b-48bd-a335-e7f99181daa0-image-import-ca\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606789 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5b0ff99f-1e04-4e23-895a-a02a303c8daa-default-certificate\") pod \"router-default-5444994796-fp4wq\" (UID: \"5b0ff99f-1e04-4e23-895a-a02a303c8daa\") " pod="openshift-ingress/router-default-5444994796-fp4wq"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606829 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9016587d-3cd5-46d7-bd50-586cd32933f7-bound-sa-token\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606844 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w65w\" (UniqueName: \"kubernetes.io/projected/a4a8a6d0-c052-4bca-9be8-dddd6d2ef017-kube-api-access-5w65w\") pod \"catalog-operator-68c6474976-rvv9j\" (UID: \"a4a8a6d0-c052-4bca-9be8-dddd6d2ef017\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rvv9j"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606865 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/44376c4d-d433-41a2-bdc1-22a9792e7640-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tjdhm\" (UID: \"44376c4d-d433-41a2-bdc1-22a9792e7640\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjdhm"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606886 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f66ddecd-538b-48bd-a335-e7f99181daa0-config\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606922 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4349e271-7758-4dcf-9053-fbc984436a8b-config\") pod \"machine-approver-56656f9798-vgr74\" (UID: \"4349e271-7758-4dcf-9053-fbc984436a8b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vgr74"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606939 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f39cab4-77fc-4641-9e84-c01b0dedc300-serving-cert\") pod \"console-operator-58897d9998-p5tqf\" (UID: \"2f39cab4-77fc-4641-9e84-c01b0dedc300\") " pod="openshift-console-operator/console-operator-58897d9998-p5tqf"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.606953 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjv8j\" (UniqueName: \"kubernetes.io/projected/4349e271-7758-4dcf-9053-fbc984436a8b-kube-api-access-mjv8j\") pod \"machine-approver-56656f9798-vgr74\" (UID: \"4349e271-7758-4dcf-9053-fbc984436a8b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vgr74"
Feb 24 00:09:26 crc kubenswrapper[4824]: E0224 00:09:26.607967 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:27.107944942 +0000 UTC m=+231.097569411 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.608428 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ksmk7"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.615942 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q8hvw"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.691984 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-h7djl"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.707948 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:09:26 crc kubenswrapper[4824]: E0224 00:09:26.708169 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:27.208124835 +0000 UTC m=+231.197749304 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.708229 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d3de326-8359-4a7c-84da-57a071a929d7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-b2k2f\" (UID: \"5d3de326-8359-4a7c-84da-57a071a929d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2k2f"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.708286 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb795a58-0029-416c-84fa-ae83cf338858-config\") pod \"kube-controller-manager-operator-78b949d7b-8dlbn\" (UID: \"bb795a58-0029-416c-84fa-ae83cf338858\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8dlbn"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.708312 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e312a49f-dc7a-49fc-9baf-3105fec587ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-99tkw\" (UID: \"e312a49f-dc7a-49fc-9baf-3105fec587ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-99tkw"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.708367 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp68j\" (UniqueName: \"kubernetes.io/projected/44376c4d-d433-41a2-bdc1-22a9792e7640-kube-api-access-lp68j\") pod \"cluster-samples-operator-665b6dd947-tjdhm\" (UID: \"44376c4d-d433-41a2-bdc1-22a9792e7640\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjdhm"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.708412 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/757dc1d0-9507-4470-8496-9162b8999465-node-bootstrap-token\") pod \"machine-config-server-vvlvv\" (UID: \"757dc1d0-9507-4470-8496-9162b8999465\") " pod="openshift-machine-config-operator/machine-config-server-vvlvv"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.708436 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f66ddecd-538b-48bd-a335-e7f99181daa0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.708456 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdcnv\" (UniqueName: \"kubernetes.io/projected/a7389587-c14d-45bd-b642-4ba3b5d7ac41-kube-api-access-xdcnv\") pod \"kube-storage-version-migrator-operator-b67b599dd-v9bck\" (UID: \"a7389587-c14d-45bd-b642-4ba3b5d7ac41\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9bck"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.708510 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/54cdfa0a-fdb0-4509-9d56-01194a25ee63-metrics-tls\") pod \"ingress-operator-5b745b69d9-ghwq4\" (UID: \"54cdfa0a-fdb0-4509-9d56-01194a25ee63\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ghwq4"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.708587 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f66ddecd-538b-48bd-a335-e7f99181daa0-etcd-serving-ca\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.708661 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b0ff99f-1e04-4e23-895a-a02a303c8daa-service-ca-bundle\") pod \"router-default-5444994796-fp4wq\" (UID: \"5b0ff99f-1e04-4e23-895a-a02a303c8daa\") " pod="openshift-ingress/router-default-5444994796-fp4wq"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.708702 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5f4f79cd-ada9-4ec7-b779-94d97bdadc97-console-config\") pod \"console-f9d7485db-zlnwh\" (UID: \"5f4f79cd-ada9-4ec7-b779-94d97bdadc97\") " pod="openshift-console/console-f9d7485db-zlnwh"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.708724 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cca2c2a-43ba-4b84-b10b-25053c6d7350-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-w6pjq\" (UID: \"4cca2c2a-43ba-4b84-b10b-25053c6d7350\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w6pjq"
Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.708744 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName:
\"kubernetes.io/secret/5f4f79cd-ada9-4ec7-b779-94d97bdadc97-console-serving-cert\") pod \"console-f9d7485db-zlnwh\" (UID: \"5f4f79cd-ada9-4ec7-b779-94d97bdadc97\") " pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.708783 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7389587-c14d-45bd-b642-4ba3b5d7ac41-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-v9bck\" (UID: \"a7389587-c14d-45bd-b642-4ba3b5d7ac41\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9bck" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.708805 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f66ddecd-538b-48bd-a335-e7f99181daa0-image-import-ca\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.708827 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5b0ff99f-1e04-4e23-895a-a02a303c8daa-default-certificate\") pod \"router-default-5444994796-fp4wq\" (UID: \"5b0ff99f-1e04-4e23-895a-a02a303c8daa\") " pod="openshift-ingress/router-default-5444994796-fp4wq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.708865 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnxwh\" (UniqueName: \"kubernetes.io/projected/42d75b69-be96-43de-8687-444a81d8ebd5-kube-api-access-bnxwh\") pod \"csi-hostpathplugin-dq9gz\" (UID: \"42d75b69-be96-43de-8687-444a81d8ebd5\") " pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 
00:09:26.708902 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f66ddecd-538b-48bd-a335-e7f99181daa0-config\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.708941 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2e0a401-0fd7-499c-ac31-fc8cb0a64366-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2jqvq\" (UID: \"a2e0a401-0fd7-499c-ac31-fc8cb0a64366\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2jqvq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.708980 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f39cab4-77fc-4641-9e84-c01b0dedc300-serving-cert\") pod \"console-operator-58897d9998-p5tqf\" (UID: \"2f39cab4-77fc-4641-9e84-c01b0dedc300\") " pod="openshift-console-operator/console-operator-58897d9998-p5tqf" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709026 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c15a2653-454b-42e4-85b5-87b99cc30198-srv-cert\") pod \"olm-operator-6b444d44fb-xs7nb\" (UID: \"c15a2653-454b-42e4-85b5-87b99cc30198\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs7nb" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709062 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4349e271-7758-4dcf-9053-fbc984436a8b-machine-approver-tls\") pod \"machine-approver-56656f9798-vgr74\" (UID: 
\"4349e271-7758-4dcf-9053-fbc984436a8b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vgr74" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709107 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fc6c8d25-266d-4f40-9b7c-e1697b87db51-tmpfs\") pod \"packageserver-d55dfcdfc-d95c9\" (UID: \"fc6c8d25-266d-4f40-9b7c-e1697b87db51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709140 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7389587-c14d-45bd-b642-4ba3b5d7ac41-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-v9bck\" (UID: \"a7389587-c14d-45bd-b642-4ba3b5d7ac41\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9bck" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709189 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d3de326-8359-4a7c-84da-57a071a929d7-config\") pod \"kube-apiserver-operator-766d6c64bb-b2k2f\" (UID: \"5d3de326-8359-4a7c-84da-57a071a929d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2k2f" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709207 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/511cd2d5-0160-44f2-adf1-acbe5c8c28cf-metrics-tls\") pod \"dns-default-5n768\" (UID: \"511cd2d5-0160-44f2-adf1-acbe5c8c28cf\") " pod="openshift-dns/dns-default-5n768" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709224 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4thqj\" (UniqueName: \"kubernetes.io/projected/c15a2653-454b-42e4-85b5-87b99cc30198-kube-api-access-4thqj\") pod \"olm-operator-6b444d44fb-xs7nb\" (UID: \"c15a2653-454b-42e4-85b5-87b99cc30198\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs7nb" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709266 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb795a58-0029-416c-84fa-ae83cf338858-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8dlbn\" (UID: \"bb795a58-0029-416c-84fa-ae83cf338858\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8dlbn" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709292 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc6c8d25-266d-4f40-9b7c-e1697b87db51-webhook-cert\") pod \"packageserver-d55dfcdfc-d95c9\" (UID: \"fc6c8d25-266d-4f40-9b7c-e1697b87db51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709347 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a4a8a6d0-c052-4bca-9be8-dddd6d2ef017-profile-collector-cert\") pod \"catalog-operator-68c6474976-rvv9j\" (UID: \"a4a8a6d0-c052-4bca-9be8-dddd6d2ef017\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rvv9j" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709413 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/511cd2d5-0160-44f2-adf1-acbe5c8c28cf-config-volume\") pod \"dns-default-5n768\" (UID: \"511cd2d5-0160-44f2-adf1-acbe5c8c28cf\") " 
pod="openshift-dns/dns-default-5n768" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709430 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hgp7\" (UniqueName: \"kubernetes.io/projected/cb0a4d10-0131-49ec-97f5-e77f1f222cdd-kube-api-access-2hgp7\") pod \"service-ca-9c57cc56f-8krrp\" (UID: \"cb0a4d10-0131-49ec-97f5-e77f1f222cdd\") " pod="openshift-service-ca/service-ca-9c57cc56f-8krrp" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709510 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb795a58-0029-416c-84fa-ae83cf338858-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8dlbn\" (UID: \"bb795a58-0029-416c-84fa-ae83cf338858\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8dlbn" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709563 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f66ddecd-538b-48bd-a335-e7f99181daa0-encryption-config\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709625 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9016587d-3cd5-46d7-bd50-586cd32933f7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709656 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvwdb\" (UniqueName: 
\"kubernetes.io/projected/a2e0a401-0fd7-499c-ac31-fc8cb0a64366-kube-api-access-hvwdb\") pod \"package-server-manager-789f6589d5-2jqvq\" (UID: \"a2e0a401-0fd7-499c-ac31-fc8cb0a64366\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2jqvq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709679 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cca2c2a-43ba-4b84-b10b-25053c6d7350-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-w6pjq\" (UID: \"4cca2c2a-43ba-4b84-b10b-25053c6d7350\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w6pjq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709706 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f66ddecd-538b-48bd-a335-e7f99181daa0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709746 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e312a49f-dc7a-49fc-9baf-3105fec587ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-99tkw\" (UID: \"e312a49f-dc7a-49fc-9baf-3105fec587ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709833 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/42d75b69-be96-43de-8687-444a81d8ebd5-mountpoint-dir\") pod \"csi-hostpathplugin-dq9gz\" (UID: \"42d75b69-be96-43de-8687-444a81d8ebd5\") " pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" Feb 24 
00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709893 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f09bc4be-bc94-4c63-93ec-4bc2fef07d1b-serviceca\") pod \"image-pruner-29531520-969xh\" (UID: \"f09bc4be-bc94-4c63-93ec-4bc2fef07d1b\") " pod="openshift-image-registry/image-pruner-29531520-969xh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709918 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5b0ff99f-1e04-4e23-895a-a02a303c8daa-stats-auth\") pod \"router-default-5444994796-fp4wq\" (UID: \"5b0ff99f-1e04-4e23-895a-a02a303c8daa\") " pod="openshift-ingress/router-default-5444994796-fp4wq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709969 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cb0a4d10-0131-49ec-97f5-e77f1f222cdd-signing-key\") pod \"service-ca-9c57cc56f-8krrp\" (UID: \"cb0a4d10-0131-49ec-97f5-e77f1f222cdd\") " pod="openshift-service-ca/service-ca-9c57cc56f-8krrp" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.709988 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w68sp\" (UniqueName: \"kubernetes.io/projected/f66ddecd-538b-48bd-a335-e7f99181daa0-kube-api-access-w68sp\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710049 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cca2c2a-43ba-4b84-b10b-25053c6d7350-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-w6pjq\" (UID: \"4cca2c2a-43ba-4b84-b10b-25053c6d7350\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w6pjq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710076 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f4f79cd-ada9-4ec7-b779-94d97bdadc97-service-ca\") pod \"console-f9d7485db-zlnwh\" (UID: \"5f4f79cd-ada9-4ec7-b779-94d97bdadc97\") " pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710164 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f39cab4-77fc-4641-9e84-c01b0dedc300-trusted-ca\") pod \"console-operator-58897d9998-p5tqf\" (UID: \"2f39cab4-77fc-4641-9e84-c01b0dedc300\") " pod="openshift-console-operator/console-operator-58897d9998-p5tqf" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710207 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d3de326-8359-4a7c-84da-57a071a929d7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-b2k2f\" (UID: \"5d3de326-8359-4a7c-84da-57a071a929d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2k2f" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710238 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/239fc97c-cb5a-4fa1-965e-7b64c90268ce-config-volume\") pod \"collect-profiles-29531520-xxvzq\" (UID: \"239fc97c-cb5a-4fa1-965e-7b64c90268ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-xxvzq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710277 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/9016587d-3cd5-46d7-bd50-586cd32933f7-registry-tls\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710298 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9016587d-3cd5-46d7-bd50-586cd32933f7-trusted-ca\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710317 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb7ql\" (UniqueName: \"kubernetes.io/projected/ac86b042-947d-402f-a7c8-bb0a69d3f86e-kube-api-access-fb7ql\") pod \"multus-admission-controller-857f4d67dd-cxlfh\" (UID: \"ac86b042-947d-402f-a7c8-bb0a69d3f86e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cxlfh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710371 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f66ddecd-538b-48bd-a335-e7f99181daa0-audit\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710392 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cb0a4d10-0131-49ec-97f5-e77f1f222cdd-signing-cabundle\") pod \"service-ca-9c57cc56f-8krrp\" (UID: \"cb0a4d10-0131-49ec-97f5-e77f1f222cdd\") " pod="openshift-service-ca/service-ca-9c57cc56f-8krrp" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710430 4824 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54cdfa0a-fdb0-4509-9d56-01194a25ee63-trusted-ca\") pod \"ingress-operator-5b745b69d9-ghwq4\" (UID: \"54cdfa0a-fdb0-4509-9d56-01194a25ee63\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ghwq4" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710449 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v9d5\" (UniqueName: \"kubernetes.io/projected/4cca2c2a-43ba-4b84-b10b-25053c6d7350-kube-api-access-7v9d5\") pod \"cluster-image-registry-operator-dc59b4c8b-w6pjq\" (UID: \"4cca2c2a-43ba-4b84-b10b-25053c6d7350\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w6pjq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710468 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f39cab4-77fc-4641-9e84-c01b0dedc300-config\") pod \"console-operator-58897d9998-p5tqf\" (UID: \"2f39cab4-77fc-4641-9e84-c01b0dedc300\") " pod="openshift-console-operator/console-operator-58897d9998-p5tqf" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710484 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc987\" (UniqueName: \"kubernetes.io/projected/f09bc4be-bc94-4c63-93ec-4bc2fef07d1b-kube-api-access-qc987\") pod \"image-pruner-29531520-969xh\" (UID: \"f09bc4be-bc94-4c63-93ec-4bc2fef07d1b\") " pod="openshift-image-registry/image-pruner-29531520-969xh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710535 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c15a2653-454b-42e4-85b5-87b99cc30198-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xs7nb\" (UID: \"c15a2653-454b-42e4-85b5-87b99cc30198\") 
" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs7nb" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710558 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5f4f79cd-ada9-4ec7-b779-94d97bdadc97-oauth-serving-cert\") pod \"console-f9d7485db-zlnwh\" (UID: \"5f4f79cd-ada9-4ec7-b779-94d97bdadc97\") " pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710617 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f66ddecd-538b-48bd-a335-e7f99181daa0-etcd-client\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710637 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f66ddecd-538b-48bd-a335-e7f99181daa0-audit-dir\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710700 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3e3160a-60c7-424c-b5a3-53841213467d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c94hb\" (UID: \"c3e3160a-60c7-424c-b5a3-53841213467d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c94hb" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710729 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9016587d-3cd5-46d7-bd50-586cd32933f7-bound-sa-token\") pod 
\"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710780 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w65w\" (UniqueName: \"kubernetes.io/projected/a4a8a6d0-c052-4bca-9be8-dddd6d2ef017-kube-api-access-5w65w\") pod \"catalog-operator-68c6474976-rvv9j\" (UID: \"a4a8a6d0-c052-4bca-9be8-dddd6d2ef017\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rvv9j" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710803 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/54cdfa0a-fdb0-4509-9d56-01194a25ee63-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ghwq4\" (UID: \"54cdfa0a-fdb0-4509-9d56-01194a25ee63\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ghwq4" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710856 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/44376c4d-d433-41a2-bdc1-22a9792e7640-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tjdhm\" (UID: \"44376c4d-d433-41a2-bdc1-22a9792e7640\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjdhm" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710915 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb795a58-0029-416c-84fa-ae83cf338858-config\") pod \"kube-controller-manager-operator-78b949d7b-8dlbn\" (UID: \"bb795a58-0029-416c-84fa-ae83cf338858\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8dlbn" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710920 4824 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4349e271-7758-4dcf-9053-fbc984436a8b-config\") pod \"machine-approver-56656f9798-vgr74\" (UID: \"4349e271-7758-4dcf-9053-fbc984436a8b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vgr74" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.710972 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f76k\" (UniqueName: \"kubernetes.io/projected/e312a49f-dc7a-49fc-9baf-3105fec587ae-kube-api-access-6f76k\") pod \"marketplace-operator-79b997595-99tkw\" (UID: \"e312a49f-dc7a-49fc-9baf-3105fec587ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711018 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjv8j\" (UniqueName: \"kubernetes.io/projected/4349e271-7758-4dcf-9053-fbc984436a8b-kube-api-access-mjv8j\") pod \"machine-approver-56656f9798-vgr74\" (UID: \"4349e271-7758-4dcf-9053-fbc984436a8b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vgr74" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711039 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6zzs\" (UniqueName: \"kubernetes.io/projected/67b600b4-d056-4b5f-b75e-0502de432461-kube-api-access-h6zzs\") pod \"service-ca-operator-777779d784-6h296\" (UID: \"67b600b4-d056-4b5f-b75e-0502de432461\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6h296" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711063 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/239fc97c-cb5a-4fa1-965e-7b64c90268ce-secret-volume\") pod \"collect-profiles-29531520-xxvzq\" 
(UID: \"239fc97c-cb5a-4fa1-965e-7b64c90268ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-xxvzq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711093 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f66ddecd-538b-48bd-a335-e7f99181daa0-node-pullsecrets\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711112 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4349e271-7758-4dcf-9053-fbc984436a8b-auth-proxy-config\") pod \"machine-approver-56656f9798-vgr74\" (UID: \"4349e271-7758-4dcf-9053-fbc984436a8b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vgr74" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711136 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dxf6\" (UniqueName: \"kubernetes.io/projected/5b0ff99f-1e04-4e23-895a-a02a303c8daa-kube-api-access-5dxf6\") pod \"router-default-5444994796-fp4wq\" (UID: \"5b0ff99f-1e04-4e23-895a-a02a303c8daa\") " pod="openshift-ingress/router-default-5444994796-fp4wq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711160 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3e3160a-60c7-424c-b5a3-53841213467d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c94hb\" (UID: \"c3e3160a-60c7-424c-b5a3-53841213467d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c94hb" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711178 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f4f79cd-ada9-4ec7-b779-94d97bdadc97-trusted-ca-bundle\") pod \"console-f9d7485db-zlnwh\" (UID: \"5f4f79cd-ada9-4ec7-b779-94d97bdadc97\") " pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711219 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/42d75b69-be96-43de-8687-444a81d8ebd5-registration-dir\") pod \"csi-hostpathplugin-dq9gz\" (UID: \"42d75b69-be96-43de-8687-444a81d8ebd5\") " pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711249 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc6c8d25-266d-4f40-9b7c-e1697b87db51-apiservice-cert\") pod \"packageserver-d55dfcdfc-d95c9\" (UID: \"fc6c8d25-266d-4f40-9b7c-e1697b87db51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711291 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67b600b4-d056-4b5f-b75e-0502de432461-serving-cert\") pod \"service-ca-operator-777779d784-6h296\" (UID: \"67b600b4-d056-4b5f-b75e-0502de432461\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6h296" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711313 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ac86b042-947d-402f-a7c8-bb0a69d3f86e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cxlfh\" (UID: \"ac86b042-947d-402f-a7c8-bb0a69d3f86e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cxlfh" Feb 24 00:09:26 crc 
kubenswrapper[4824]: I0224 00:09:26.711330 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzbm6\" (UniqueName: \"kubernetes.io/projected/782d3fe9-7b5b-4d44-a3e6-0efea9d617ea-kube-api-access-bzbm6\") pod \"ingress-canary-4tvd9\" (UID: \"782d3fe9-7b5b-4d44-a3e6-0efea9d617ea\") " pod="openshift-ingress-canary/ingress-canary-4tvd9" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711349 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3e3160a-60c7-424c-b5a3-53841213467d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c94hb\" (UID: \"c3e3160a-60c7-424c-b5a3-53841213467d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c94hb" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711367 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67b600b4-d056-4b5f-b75e-0502de432461-config\") pod \"service-ca-operator-777779d784-6h296\" (UID: \"67b600b4-d056-4b5f-b75e-0502de432461\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6h296" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711408 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4a8a6d0-c052-4bca-9be8-dddd6d2ef017-srv-cert\") pod \"catalog-operator-68c6474976-rvv9j\" (UID: \"a4a8a6d0-c052-4bca-9be8-dddd6d2ef017\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rvv9j" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711430 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/782d3fe9-7b5b-4d44-a3e6-0efea9d617ea-cert\") pod \"ingress-canary-4tvd9\" (UID: 
\"782d3fe9-7b5b-4d44-a3e6-0efea9d617ea\") " pod="openshift-ingress-canary/ingress-canary-4tvd9" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711449 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5f4f79cd-ada9-4ec7-b779-94d97bdadc97-console-oauth-config\") pod \"console-f9d7485db-zlnwh\" (UID: \"5f4f79cd-ada9-4ec7-b779-94d97bdadc97\") " pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711469 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ddjs\" (UniqueName: \"kubernetes.io/projected/5f4f79cd-ada9-4ec7-b779-94d97bdadc97-kube-api-access-2ddjs\") pod \"console-f9d7485db-zlnwh\" (UID: \"5f4f79cd-ada9-4ec7-b779-94d97bdadc97\") " pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711497 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711540 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/42d75b69-be96-43de-8687-444a81d8ebd5-csi-data-dir\") pod \"csi-hostpathplugin-dq9gz\" (UID: \"42d75b69-be96-43de-8687-444a81d8ebd5\") " pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711562 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb8wj\" (UniqueName: 
\"kubernetes.io/projected/511cd2d5-0160-44f2-adf1-acbe5c8c28cf-kube-api-access-qb8wj\") pod \"dns-default-5n768\" (UID: \"511cd2d5-0160-44f2-adf1-acbe5c8c28cf\") " pod="openshift-dns/dns-default-5n768" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711581 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/757dc1d0-9507-4470-8496-9162b8999465-certs\") pod \"machine-config-server-vvlvv\" (UID: \"757dc1d0-9507-4470-8496-9162b8999465\") " pod="openshift-machine-config-operator/machine-config-server-vvlvv" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711722 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9016587d-3cd5-46d7-bd50-586cd32933f7-registry-certificates\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711744 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/42d75b69-be96-43de-8687-444a81d8ebd5-plugins-dir\") pod \"csi-hostpathplugin-dq9gz\" (UID: \"42d75b69-be96-43de-8687-444a81d8ebd5\") " pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711772 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfr7v\" (UniqueName: \"kubernetes.io/projected/ac257861-33c1-4e92-9d58-bb7351f6316e-kube-api-access-gfr7v\") pod \"migrator-59844c95c7-5ltfz\" (UID: \"ac257861-33c1-4e92-9d58-bb7351f6316e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5ltfz" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711796 4824 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-r7dh8\" (UniqueName: \"kubernetes.io/projected/2f39cab4-77fc-4641-9e84-c01b0dedc300-kube-api-access-r7dh8\") pod \"console-operator-58897d9998-p5tqf\" (UID: \"2f39cab4-77fc-4641-9e84-c01b0dedc300\") " pod="openshift-console-operator/console-operator-58897d9998-p5tqf" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711817 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9016587d-3cd5-46d7-bd50-586cd32933f7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711836 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/42d75b69-be96-43de-8687-444a81d8ebd5-socket-dir\") pod \"csi-hostpathplugin-dq9gz\" (UID: \"42d75b69-be96-43de-8687-444a81d8ebd5\") " pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711871 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f66ddecd-538b-48bd-a335-e7f99181daa0-serving-cert\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711890 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdkzl\" (UniqueName: \"kubernetes.io/projected/9016587d-3cd5-46d7-bd50-586cd32933f7-kube-api-access-cdkzl\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:26 crc 
kubenswrapper[4824]: I0224 00:09:26.711915 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr4rm\" (UniqueName: \"kubernetes.io/projected/54cdfa0a-fdb0-4509-9d56-01194a25ee63-kube-api-access-sr4rm\") pod \"ingress-operator-5b745b69d9-ghwq4\" (UID: \"54cdfa0a-fdb0-4509-9d56-01194a25ee63\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ghwq4" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711933 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvmgf\" (UniqueName: \"kubernetes.io/projected/757dc1d0-9507-4470-8496-9162b8999465-kube-api-access-xvmgf\") pod \"machine-config-server-vvlvv\" (UID: \"757dc1d0-9507-4470-8496-9162b8999465\") " pod="openshift-machine-config-operator/machine-config-server-vvlvv" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711961 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9glzv\" (UniqueName: \"kubernetes.io/projected/239fc97c-cb5a-4fa1-965e-7b64c90268ce-kube-api-access-9glzv\") pod \"collect-profiles-29531520-xxvzq\" (UID: \"239fc97c-cb5a-4fa1-965e-7b64c90268ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-xxvzq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711978 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsdfv\" (UniqueName: \"kubernetes.io/projected/fc6c8d25-266d-4f40-9b7c-e1697b87db51-kube-api-access-gsdfv\") pod \"packageserver-d55dfcdfc-d95c9\" (UID: \"fc6c8d25-266d-4f40-9b7c-e1697b87db51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.711998 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b0ff99f-1e04-4e23-895a-a02a303c8daa-metrics-certs\") pod \"router-default-5444994796-fp4wq\" (UID: \"5b0ff99f-1e04-4e23-895a-a02a303c8daa\") " pod="openshift-ingress/router-default-5444994796-fp4wq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.714607 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f66ddecd-538b-48bd-a335-e7f99181daa0-etcd-serving-ca\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.714933 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b0ff99f-1e04-4e23-895a-a02a303c8daa-service-ca-bundle\") pod \"router-default-5444994796-fp4wq\" (UID: \"5b0ff99f-1e04-4e23-895a-a02a303c8daa\") " pod="openshift-ingress/router-default-5444994796-fp4wq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.715542 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f66ddecd-538b-48bd-a335-e7f99181daa0-node-pullsecrets\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.715555 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f09bc4be-bc94-4c63-93ec-4bc2fef07d1b-serviceca\") pod \"image-pruner-29531520-969xh\" (UID: \"f09bc4be-bc94-4c63-93ec-4bc2fef07d1b\") " pod="openshift-image-registry/image-pruner-29531520-969xh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.716177 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5d3de326-8359-4a7c-84da-57a071a929d7-config\") pod \"kube-apiserver-operator-766d6c64bb-b2k2f\" (UID: \"5d3de326-8359-4a7c-84da-57a071a929d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2k2f" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.716790 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4349e271-7758-4dcf-9053-fbc984436a8b-auth-proxy-config\") pod \"machine-approver-56656f9798-vgr74\" (UID: \"4349e271-7758-4dcf-9053-fbc984436a8b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vgr74" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.717006 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f66ddecd-538b-48bd-a335-e7f99181daa0-image-import-ca\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.717238 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwm9v" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.719600 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f66ddecd-538b-48bd-a335-e7f99181daa0-audit\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.721346 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d3de326-8359-4a7c-84da-57a071a929d7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-b2k2f\" (UID: \"5d3de326-8359-4a7c-84da-57a071a929d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2k2f" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.722086 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5b0ff99f-1e04-4e23-895a-a02a303c8daa-stats-auth\") pod \"router-default-5444994796-fp4wq\" (UID: \"5b0ff99f-1e04-4e23-895a-a02a303c8daa\") " pod="openshift-ingress/router-default-5444994796-fp4wq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.722369 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b0ff99f-1e04-4e23-895a-a02a303c8daa-metrics-certs\") pod \"router-default-5444994796-fp4wq\" (UID: \"5b0ff99f-1e04-4e23-895a-a02a303c8daa\") " pod="openshift-ingress/router-default-5444994796-fp4wq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.722934 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a4a8a6d0-c052-4bca-9be8-dddd6d2ef017-profile-collector-cert\") pod \"catalog-operator-68c6474976-rvv9j\" (UID: 
\"a4a8a6d0-c052-4bca-9be8-dddd6d2ef017\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rvv9j" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.723357 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5b0ff99f-1e04-4e23-895a-a02a303c8daa-default-certificate\") pod \"router-default-5444994796-fp4wq\" (UID: \"5b0ff99f-1e04-4e23-895a-a02a303c8daa\") " pod="openshift-ingress/router-default-5444994796-fp4wq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.724663 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2f39cab4-77fc-4641-9e84-c01b0dedc300-trusted-ca\") pod \"console-operator-58897d9998-p5tqf\" (UID: \"2f39cab4-77fc-4641-9e84-c01b0dedc300\") " pod="openshift-console-operator/console-operator-58897d9998-p5tqf" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.725689 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9016587d-3cd5-46d7-bd50-586cd32933f7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.727113 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9016587d-3cd5-46d7-bd50-586cd32933f7-registry-tls\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.727589 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f66ddecd-538b-48bd-a335-e7f99181daa0-config\") pod \"apiserver-76f77b778f-xfl22\" 
(UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.728332 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f66ddecd-538b-48bd-a335-e7f99181daa0-audit-dir\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.730164 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4349e271-7758-4dcf-9053-fbc984436a8b-config\") pod \"machine-approver-56656f9798-vgr74\" (UID: \"4349e271-7758-4dcf-9053-fbc984436a8b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vgr74" Feb 24 00:09:26 crc kubenswrapper[4824]: E0224 00:09:26.730841 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:27.230812189 +0000 UTC m=+231.220436658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.731448 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb795a58-0029-416c-84fa-ae83cf338858-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8dlbn\" (UID: \"bb795a58-0029-416c-84fa-ae83cf338858\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8dlbn" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.731637 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f39cab4-77fc-4641-9e84-c01b0dedc300-config\") pod \"console-operator-58897d9998-p5tqf\" (UID: \"2f39cab4-77fc-4641-9e84-c01b0dedc300\") " pod="openshift-console-operator/console-operator-58897d9998-p5tqf" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.731954 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9016587d-3cd5-46d7-bd50-586cd32933f7-registry-certificates\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.732545 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9016587d-3cd5-46d7-bd50-586cd32933f7-trusted-ca\") pod \"image-registry-697d97f7c8-ccm27\" (UID: 
\"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.734171 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/44376c4d-d433-41a2-bdc1-22a9792e7640-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tjdhm\" (UID: \"44376c4d-d433-41a2-bdc1-22a9792e7640\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjdhm" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.737205 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f66ddecd-538b-48bd-a335-e7f99181daa0-etcd-client\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.742886 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f39cab4-77fc-4641-9e84-c01b0dedc300-serving-cert\") pod \"console-operator-58897d9998-p5tqf\" (UID: \"2f39cab4-77fc-4641-9e84-c01b0dedc300\") " pod="openshift-console-operator/console-operator-58897d9998-p5tqf" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.743883 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9016587d-3cd5-46d7-bd50-586cd32933f7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.745282 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4a8a6d0-c052-4bca-9be8-dddd6d2ef017-srv-cert\") pod 
\"catalog-operator-68c6474976-rvv9j\" (UID: \"a4a8a6d0-c052-4bca-9be8-dddd6d2ef017\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rvv9j" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.746365 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4349e271-7758-4dcf-9053-fbc984436a8b-machine-approver-tls\") pod \"machine-approver-56656f9798-vgr74\" (UID: \"4349e271-7758-4dcf-9053-fbc984436a8b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vgr74" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.746744 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f66ddecd-538b-48bd-a335-e7f99181daa0-encryption-config\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.747139 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ac86b042-947d-402f-a7c8-bb0a69d3f86e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cxlfh\" (UID: \"ac86b042-947d-402f-a7c8-bb0a69d3f86e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cxlfh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.748495 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f66ddecd-538b-48bd-a335-e7f99181daa0-serving-cert\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.763553 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/bb795a58-0029-416c-84fa-ae83cf338858-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8dlbn\" (UID: \"bb795a58-0029-416c-84fa-ae83cf338858\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8dlbn" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.792561 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjv8j\" (UniqueName: \"kubernetes.io/projected/4349e271-7758-4dcf-9053-fbc984436a8b-kube-api-access-mjv8j\") pod \"machine-approver-56656f9798-vgr74\" (UID: \"4349e271-7758-4dcf-9053-fbc984436a8b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vgr74" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.812957 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813142 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/239fc97c-cb5a-4fa1-965e-7b64c90268ce-secret-volume\") pod \"collect-profiles-29531520-xxvzq\" (UID: \"239fc97c-cb5a-4fa1-965e-7b64c90268ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-xxvzq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813175 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3e3160a-60c7-424c-b5a3-53841213467d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c94hb\" (UID: \"c3e3160a-60c7-424c-b5a3-53841213467d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c94hb" Feb 24 00:09:26 crc 
kubenswrapper[4824]: I0224 00:09:26.813197 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f4f79cd-ada9-4ec7-b779-94d97bdadc97-trusted-ca-bundle\") pod \"console-f9d7485db-zlnwh\" (UID: \"5f4f79cd-ada9-4ec7-b779-94d97bdadc97\") " pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813217 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/42d75b69-be96-43de-8687-444a81d8ebd5-registration-dir\") pod \"csi-hostpathplugin-dq9gz\" (UID: \"42d75b69-be96-43de-8687-444a81d8ebd5\") " pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813242 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc6c8d25-266d-4f40-9b7c-e1697b87db51-apiservice-cert\") pod \"packageserver-d55dfcdfc-d95c9\" (UID: \"fc6c8d25-266d-4f40-9b7c-e1697b87db51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813262 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67b600b4-d056-4b5f-b75e-0502de432461-serving-cert\") pod \"service-ca-operator-777779d784-6h296\" (UID: \"67b600b4-d056-4b5f-b75e-0502de432461\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6h296" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813285 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzbm6\" (UniqueName: \"kubernetes.io/projected/782d3fe9-7b5b-4d44-a3e6-0efea9d617ea-kube-api-access-bzbm6\") pod \"ingress-canary-4tvd9\" (UID: \"782d3fe9-7b5b-4d44-a3e6-0efea9d617ea\") " 
pod="openshift-ingress-canary/ingress-canary-4tvd9" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813301 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3e3160a-60c7-424c-b5a3-53841213467d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c94hb\" (UID: \"c3e3160a-60c7-424c-b5a3-53841213467d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c94hb" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813318 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67b600b4-d056-4b5f-b75e-0502de432461-config\") pod \"service-ca-operator-777779d784-6h296\" (UID: \"67b600b4-d056-4b5f-b75e-0502de432461\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6h296" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813339 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/782d3fe9-7b5b-4d44-a3e6-0efea9d617ea-cert\") pod \"ingress-canary-4tvd9\" (UID: \"782d3fe9-7b5b-4d44-a3e6-0efea9d617ea\") " pod="openshift-ingress-canary/ingress-canary-4tvd9" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813363 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/42d75b69-be96-43de-8687-444a81d8ebd5-csi-data-dir\") pod \"csi-hostpathplugin-dq9gz\" (UID: \"42d75b69-be96-43de-8687-444a81d8ebd5\") " pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813380 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb8wj\" (UniqueName: \"kubernetes.io/projected/511cd2d5-0160-44f2-adf1-acbe5c8c28cf-kube-api-access-qb8wj\") pod \"dns-default-5n768\" (UID: \"511cd2d5-0160-44f2-adf1-acbe5c8c28cf\") 
" pod="openshift-dns/dns-default-5n768" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813401 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/757dc1d0-9507-4470-8496-9162b8999465-certs\") pod \"machine-config-server-vvlvv\" (UID: \"757dc1d0-9507-4470-8496-9162b8999465\") " pod="openshift-machine-config-operator/machine-config-server-vvlvv" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813417 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5f4f79cd-ada9-4ec7-b779-94d97bdadc97-console-oauth-config\") pod \"console-f9d7485db-zlnwh\" (UID: \"5f4f79cd-ada9-4ec7-b779-94d97bdadc97\") " pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813436 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ddjs\" (UniqueName: \"kubernetes.io/projected/5f4f79cd-ada9-4ec7-b779-94d97bdadc97-kube-api-access-2ddjs\") pod \"console-f9d7485db-zlnwh\" (UID: \"5f4f79cd-ada9-4ec7-b779-94d97bdadc97\") " pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813454 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/42d75b69-be96-43de-8687-444a81d8ebd5-plugins-dir\") pod \"csi-hostpathplugin-dq9gz\" (UID: \"42d75b69-be96-43de-8687-444a81d8ebd5\") " pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813476 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfr7v\" (UniqueName: \"kubernetes.io/projected/ac257861-33c1-4e92-9d58-bb7351f6316e-kube-api-access-gfr7v\") pod \"migrator-59844c95c7-5ltfz\" (UID: \"ac257861-33c1-4e92-9d58-bb7351f6316e\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5ltfz" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813500 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/42d75b69-be96-43de-8687-444a81d8ebd5-socket-dir\") pod \"csi-hostpathplugin-dq9gz\" (UID: \"42d75b69-be96-43de-8687-444a81d8ebd5\") " pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813543 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr4rm\" (UniqueName: \"kubernetes.io/projected/54cdfa0a-fdb0-4509-9d56-01194a25ee63-kube-api-access-sr4rm\") pod \"ingress-operator-5b745b69d9-ghwq4\" (UID: \"54cdfa0a-fdb0-4509-9d56-01194a25ee63\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ghwq4" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813562 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvmgf\" (UniqueName: \"kubernetes.io/projected/757dc1d0-9507-4470-8496-9162b8999465-kube-api-access-xvmgf\") pod \"machine-config-server-vvlvv\" (UID: \"757dc1d0-9507-4470-8496-9162b8999465\") " pod="openshift-machine-config-operator/machine-config-server-vvlvv" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813587 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9glzv\" (UniqueName: \"kubernetes.io/projected/239fc97c-cb5a-4fa1-965e-7b64c90268ce-kube-api-access-9glzv\") pod \"collect-profiles-29531520-xxvzq\" (UID: \"239fc97c-cb5a-4fa1-965e-7b64c90268ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-xxvzq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813605 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsdfv\" (UniqueName: 
\"kubernetes.io/projected/fc6c8d25-266d-4f40-9b7c-e1697b87db51-kube-api-access-gsdfv\") pod \"packageserver-d55dfcdfc-d95c9\" (UID: \"fc6c8d25-266d-4f40-9b7c-e1697b87db51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813634 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e312a49f-dc7a-49fc-9baf-3105fec587ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-99tkw\" (UID: \"e312a49f-dc7a-49fc-9baf-3105fec587ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813652 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/757dc1d0-9507-4470-8496-9162b8999465-node-bootstrap-token\") pod \"machine-config-server-vvlvv\" (UID: \"757dc1d0-9507-4470-8496-9162b8999465\") " pod="openshift-machine-config-operator/machine-config-server-vvlvv" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813680 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdcnv\" (UniqueName: \"kubernetes.io/projected/a7389587-c14d-45bd-b642-4ba3b5d7ac41-kube-api-access-xdcnv\") pod \"kube-storage-version-migrator-operator-b67b599dd-v9bck\" (UID: \"a7389587-c14d-45bd-b642-4ba3b5d7ac41\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9bck" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813698 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/54cdfa0a-fdb0-4509-9d56-01194a25ee63-metrics-tls\") pod \"ingress-operator-5b745b69d9-ghwq4\" (UID: \"54cdfa0a-fdb0-4509-9d56-01194a25ee63\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ghwq4" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813718 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5f4f79cd-ada9-4ec7-b779-94d97bdadc97-console-config\") pod \"console-f9d7485db-zlnwh\" (UID: \"5f4f79cd-ada9-4ec7-b779-94d97bdadc97\") " pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813737 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cca2c2a-43ba-4b84-b10b-25053c6d7350-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-w6pjq\" (UID: \"4cca2c2a-43ba-4b84-b10b-25053c6d7350\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w6pjq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813754 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f4f79cd-ada9-4ec7-b779-94d97bdadc97-console-serving-cert\") pod \"console-f9d7485db-zlnwh\" (UID: \"5f4f79cd-ada9-4ec7-b779-94d97bdadc97\") " pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813772 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnxwh\" (UniqueName: \"kubernetes.io/projected/42d75b69-be96-43de-8687-444a81d8ebd5-kube-api-access-bnxwh\") pod \"csi-hostpathplugin-dq9gz\" (UID: \"42d75b69-be96-43de-8687-444a81d8ebd5\") " pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813789 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7389587-c14d-45bd-b642-4ba3b5d7ac41-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-v9bck\" (UID: \"a7389587-c14d-45bd-b642-4ba3b5d7ac41\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9bck" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813812 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2e0a401-0fd7-499c-ac31-fc8cb0a64366-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2jqvq\" (UID: \"a2e0a401-0fd7-499c-ac31-fc8cb0a64366\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2jqvq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813832 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c15a2653-454b-42e4-85b5-87b99cc30198-srv-cert\") pod \"olm-operator-6b444d44fb-xs7nb\" (UID: \"c15a2653-454b-42e4-85b5-87b99cc30198\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs7nb" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813851 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fc6c8d25-266d-4f40-9b7c-e1697b87db51-tmpfs\") pod \"packageserver-d55dfcdfc-d95c9\" (UID: \"fc6c8d25-266d-4f40-9b7c-e1697b87db51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813871 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7389587-c14d-45bd-b642-4ba3b5d7ac41-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-v9bck\" (UID: \"a7389587-c14d-45bd-b642-4ba3b5d7ac41\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9bck" Feb 24 00:09:26 crc 
kubenswrapper[4824]: I0224 00:09:26.813890 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/511cd2d5-0160-44f2-adf1-acbe5c8c28cf-metrics-tls\") pod \"dns-default-5n768\" (UID: \"511cd2d5-0160-44f2-adf1-acbe5c8c28cf\") " pod="openshift-dns/dns-default-5n768" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813908 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4thqj\" (UniqueName: \"kubernetes.io/projected/c15a2653-454b-42e4-85b5-87b99cc30198-kube-api-access-4thqj\") pod \"olm-operator-6b444d44fb-xs7nb\" (UID: \"c15a2653-454b-42e4-85b5-87b99cc30198\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs7nb" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813933 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc6c8d25-266d-4f40-9b7c-e1697b87db51-webhook-cert\") pod \"packageserver-d55dfcdfc-d95c9\" (UID: \"fc6c8d25-266d-4f40-9b7c-e1697b87db51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813962 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/511cd2d5-0160-44f2-adf1-acbe5c8c28cf-config-volume\") pod \"dns-default-5n768\" (UID: \"511cd2d5-0160-44f2-adf1-acbe5c8c28cf\") " pod="openshift-dns/dns-default-5n768" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.813991 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hgp7\" (UniqueName: \"kubernetes.io/projected/cb0a4d10-0131-49ec-97f5-e77f1f222cdd-kube-api-access-2hgp7\") pod \"service-ca-9c57cc56f-8krrp\" (UID: \"cb0a4d10-0131-49ec-97f5-e77f1f222cdd\") " pod="openshift-service-ca/service-ca-9c57cc56f-8krrp" Feb 24 00:09:26 crc 
kubenswrapper[4824]: I0224 00:09:26.814020 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvwdb\" (UniqueName: \"kubernetes.io/projected/a2e0a401-0fd7-499c-ac31-fc8cb0a64366-kube-api-access-hvwdb\") pod \"package-server-manager-789f6589d5-2jqvq\" (UID: \"a2e0a401-0fd7-499c-ac31-fc8cb0a64366\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2jqvq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.814039 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cca2c2a-43ba-4b84-b10b-25053c6d7350-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-w6pjq\" (UID: \"4cca2c2a-43ba-4b84-b10b-25053c6d7350\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w6pjq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.814060 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e312a49f-dc7a-49fc-9baf-3105fec587ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-99tkw\" (UID: \"e312a49f-dc7a-49fc-9baf-3105fec587ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.814078 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/42d75b69-be96-43de-8687-444a81d8ebd5-mountpoint-dir\") pod \"csi-hostpathplugin-dq9gz\" (UID: \"42d75b69-be96-43de-8687-444a81d8ebd5\") " pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.814095 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cb0a4d10-0131-49ec-97f5-e77f1f222cdd-signing-key\") pod \"service-ca-9c57cc56f-8krrp\" (UID: 
\"cb0a4d10-0131-49ec-97f5-e77f1f222cdd\") " pod="openshift-service-ca/service-ca-9c57cc56f-8krrp" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.814126 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cca2c2a-43ba-4b84-b10b-25053c6d7350-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-w6pjq\" (UID: \"4cca2c2a-43ba-4b84-b10b-25053c6d7350\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w6pjq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.814143 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f4f79cd-ada9-4ec7-b779-94d97bdadc97-service-ca\") pod \"console-f9d7485db-zlnwh\" (UID: \"5f4f79cd-ada9-4ec7-b779-94d97bdadc97\") " pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.814159 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/239fc97c-cb5a-4fa1-965e-7b64c90268ce-config-volume\") pod \"collect-profiles-29531520-xxvzq\" (UID: \"239fc97c-cb5a-4fa1-965e-7b64c90268ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-xxvzq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.814206 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cb0a4d10-0131-49ec-97f5-e77f1f222cdd-signing-cabundle\") pod \"service-ca-9c57cc56f-8krrp\" (UID: \"cb0a4d10-0131-49ec-97f5-e77f1f222cdd\") " pod="openshift-service-ca/service-ca-9c57cc56f-8krrp" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.814223 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54cdfa0a-fdb0-4509-9d56-01194a25ee63-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-ghwq4\" (UID: \"54cdfa0a-fdb0-4509-9d56-01194a25ee63\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ghwq4" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.814246 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c15a2653-454b-42e4-85b5-87b99cc30198-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xs7nb\" (UID: \"c15a2653-454b-42e4-85b5-87b99cc30198\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs7nb" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.814262 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v9d5\" (UniqueName: \"kubernetes.io/projected/4cca2c2a-43ba-4b84-b10b-25053c6d7350-kube-api-access-7v9d5\") pod \"cluster-image-registry-operator-dc59b4c8b-w6pjq\" (UID: \"4cca2c2a-43ba-4b84-b10b-25053c6d7350\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w6pjq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.814280 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5f4f79cd-ada9-4ec7-b779-94d97bdadc97-oauth-serving-cert\") pod \"console-f9d7485db-zlnwh\" (UID: \"5f4f79cd-ada9-4ec7-b779-94d97bdadc97\") " pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.814297 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3e3160a-60c7-424c-b5a3-53841213467d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c94hb\" (UID: \"c3e3160a-60c7-424c-b5a3-53841213467d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c94hb" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.814323 4824 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/54cdfa0a-fdb0-4509-9d56-01194a25ee63-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ghwq4\" (UID: \"54cdfa0a-fdb0-4509-9d56-01194a25ee63\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ghwq4" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.814344 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f76k\" (UniqueName: \"kubernetes.io/projected/e312a49f-dc7a-49fc-9baf-3105fec587ae-kube-api-access-6f76k\") pod \"marketplace-operator-79b997595-99tkw\" (UID: \"e312a49f-dc7a-49fc-9baf-3105fec587ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" Feb 24 00:09:26 crc kubenswrapper[4824]: E0224 00:09:26.814370 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:27.314345046 +0000 UTC m=+231.303969505 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.814431 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6zzs\" (UniqueName: \"kubernetes.io/projected/67b600b4-d056-4b5f-b75e-0502de432461-kube-api-access-h6zzs\") pod \"service-ca-operator-777779d784-6h296\" (UID: \"67b600b4-d056-4b5f-b75e-0502de432461\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6h296" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.816812 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cca2c2a-43ba-4b84-b10b-25053c6d7350-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-w6pjq\" (UID: \"4cca2c2a-43ba-4b84-b10b-25053c6d7350\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w6pjq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.817573 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/511cd2d5-0160-44f2-adf1-acbe5c8c28cf-config-volume\") pod \"dns-default-5n768\" (UID: \"511cd2d5-0160-44f2-adf1-acbe5c8c28cf\") " pod="openshift-dns/dns-default-5n768" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.817622 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/42d75b69-be96-43de-8687-444a81d8ebd5-csi-data-dir\") pod \"csi-hostpathplugin-dq9gz\" (UID: 
\"42d75b69-be96-43de-8687-444a81d8ebd5\") " pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.818966 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cb0a4d10-0131-49ec-97f5-e77f1f222cdd-signing-cabundle\") pod \"service-ca-9c57cc56f-8krrp\" (UID: \"cb0a4d10-0131-49ec-97f5-e77f1f222cdd\") " pod="openshift-service-ca/service-ca-9c57cc56f-8krrp" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.819044 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/42d75b69-be96-43de-8687-444a81d8ebd5-mountpoint-dir\") pod \"csi-hostpathplugin-dq9gz\" (UID: \"42d75b69-be96-43de-8687-444a81d8ebd5\") " pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.823086 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54cdfa0a-fdb0-4509-9d56-01194a25ee63-trusted-ca\") pod \"ingress-operator-5b745b69d9-ghwq4\" (UID: \"54cdfa0a-fdb0-4509-9d56-01194a25ee63\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ghwq4" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.823437 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/42d75b69-be96-43de-8687-444a81d8ebd5-plugins-dir\") pod \"csi-hostpathplugin-dq9gz\" (UID: \"42d75b69-be96-43de-8687-444a81d8ebd5\") " pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.823483 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dxf6\" (UniqueName: \"kubernetes.io/projected/5b0ff99f-1e04-4e23-895a-a02a303c8daa-kube-api-access-5dxf6\") pod \"router-default-5444994796-fp4wq\" (UID: 
\"5b0ff99f-1e04-4e23-895a-a02a303c8daa\") " pod="openshift-ingress/router-default-5444994796-fp4wq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.824680 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67b600b4-d056-4b5f-b75e-0502de432461-config\") pod \"service-ca-operator-777779d784-6h296\" (UID: \"67b600b4-d056-4b5f-b75e-0502de432461\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6h296" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.825204 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3e3160a-60c7-424c-b5a3-53841213467d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c94hb\" (UID: \"c3e3160a-60c7-424c-b5a3-53841213467d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c94hb" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.825634 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/42d75b69-be96-43de-8687-444a81d8ebd5-socket-dir\") pod \"csi-hostpathplugin-dq9gz\" (UID: \"42d75b69-be96-43de-8687-444a81d8ebd5\") " pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.826387 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/757dc1d0-9507-4470-8496-9162b8999465-node-bootstrap-token\") pod \"machine-config-server-vvlvv\" (UID: \"757dc1d0-9507-4470-8496-9162b8999465\") " pod="openshift-machine-config-operator/machine-config-server-vvlvv" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.826447 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/42d75b69-be96-43de-8687-444a81d8ebd5-registration-dir\") pod 
\"csi-hostpathplugin-dq9gz\" (UID: \"42d75b69-be96-43de-8687-444a81d8ebd5\") " pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.826480 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/239fc97c-cb5a-4fa1-965e-7b64c90268ce-secret-volume\") pod \"collect-profiles-29531520-xxvzq\" (UID: \"239fc97c-cb5a-4fa1-965e-7b64c90268ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-xxvzq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.827808 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e312a49f-dc7a-49fc-9baf-3105fec587ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-99tkw\" (UID: \"e312a49f-dc7a-49fc-9baf-3105fec587ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.828083 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f4f79cd-ada9-4ec7-b779-94d97bdadc97-trusted-ca-bundle\") pod \"console-f9d7485db-zlnwh\" (UID: \"5f4f79cd-ada9-4ec7-b779-94d97bdadc97\") " pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.829261 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f4f79cd-ada9-4ec7-b779-94d97bdadc97-service-ca\") pod \"console-f9d7485db-zlnwh\" (UID: \"5f4f79cd-ada9-4ec7-b779-94d97bdadc97\") " pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.829656 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3e3160a-60c7-424c-b5a3-53841213467d-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-c94hb\" (UID: \"c3e3160a-60c7-424c-b5a3-53841213467d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c94hb" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.830404 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e312a49f-dc7a-49fc-9baf-3105fec587ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-99tkw\" (UID: \"e312a49f-dc7a-49fc-9baf-3105fec587ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.831022 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c15a2653-454b-42e4-85b5-87b99cc30198-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xs7nb\" (UID: \"c15a2653-454b-42e4-85b5-87b99cc30198\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs7nb" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.831570 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7389587-c14d-45bd-b642-4ba3b5d7ac41-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-v9bck\" (UID: \"a7389587-c14d-45bd-b642-4ba3b5d7ac41\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9bck" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.832053 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/757dc1d0-9507-4470-8496-9162b8999465-certs\") pod \"machine-config-server-vvlvv\" (UID: \"757dc1d0-9507-4470-8496-9162b8999465\") " pod="openshift-machine-config-operator/machine-config-server-vvlvv" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.833529 4824 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7389587-c14d-45bd-b642-4ba3b5d7ac41-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-v9bck\" (UID: \"a7389587-c14d-45bd-b642-4ba3b5d7ac41\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9bck" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.833722 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/782d3fe9-7b5b-4d44-a3e6-0efea9d617ea-cert\") pod \"ingress-canary-4tvd9\" (UID: \"782d3fe9-7b5b-4d44-a3e6-0efea9d617ea\") " pod="openshift-ingress-canary/ingress-canary-4tvd9" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.834214 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cb0a4d10-0131-49ec-97f5-e77f1f222cdd-signing-key\") pod \"service-ca-9c57cc56f-8krrp\" (UID: \"cb0a4d10-0131-49ec-97f5-e77f1f222cdd\") " pod="openshift-service-ca/service-ca-9c57cc56f-8krrp" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.834700 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67b600b4-d056-4b5f-b75e-0502de432461-serving-cert\") pod \"service-ca-operator-777779d784-6h296\" (UID: \"67b600b4-d056-4b5f-b75e-0502de432461\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6h296" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.834890 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc6c8d25-266d-4f40-9b7c-e1697b87db51-apiservice-cert\") pod \"packageserver-d55dfcdfc-d95c9\" (UID: \"fc6c8d25-266d-4f40-9b7c-e1697b87db51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.836558 4824 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w68sp\" (UniqueName: \"kubernetes.io/projected/f66ddecd-538b-48bd-a335-e7f99181daa0-kube-api-access-w68sp\") pod \"apiserver-76f77b778f-xfl22\" (UID: \"f66ddecd-538b-48bd-a335-e7f99181daa0\") " pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.837683 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2e0a401-0fd7-499c-ac31-fc8cb0a64366-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2jqvq\" (UID: \"a2e0a401-0fd7-499c-ac31-fc8cb0a64366\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2jqvq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.838000 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/239fc97c-cb5a-4fa1-965e-7b64c90268ce-config-volume\") pod \"collect-profiles-29531520-xxvzq\" (UID: \"239fc97c-cb5a-4fa1-965e-7b64c90268ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-xxvzq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.838019 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cca2c2a-43ba-4b84-b10b-25053c6d7350-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-w6pjq\" (UID: \"4cca2c2a-43ba-4b84-b10b-25053c6d7350\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w6pjq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.838318 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fc6c8d25-266d-4f40-9b7c-e1697b87db51-tmpfs\") pod \"packageserver-d55dfcdfc-d95c9\" (UID: \"fc6c8d25-266d-4f40-9b7c-e1697b87db51\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.840001 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc6c8d25-266d-4f40-9b7c-e1697b87db51-webhook-cert\") pod \"packageserver-d55dfcdfc-d95c9\" (UID: \"fc6c8d25-266d-4f40-9b7c-e1697b87db51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.841391 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5f4f79cd-ada9-4ec7-b779-94d97bdadc97-console-oauth-config\") pod \"console-f9d7485db-zlnwh\" (UID: \"5f4f79cd-ada9-4ec7-b779-94d97bdadc97\") " pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.841572 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5f4f79cd-ada9-4ec7-b779-94d97bdadc97-oauth-serving-cert\") pod \"console-f9d7485db-zlnwh\" (UID: \"5f4f79cd-ada9-4ec7-b779-94d97bdadc97\") " pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.842244 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc987\" (UniqueName: \"kubernetes.io/projected/f09bc4be-bc94-4c63-93ec-4bc2fef07d1b-kube-api-access-qc987\") pod \"image-pruner-29531520-969xh\" (UID: \"f09bc4be-bc94-4c63-93ec-4bc2fef07d1b\") " pod="openshift-image-registry/image-pruner-29531520-969xh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.843882 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/511cd2d5-0160-44f2-adf1-acbe5c8c28cf-metrics-tls\") pod \"dns-default-5n768\" (UID: \"511cd2d5-0160-44f2-adf1-acbe5c8c28cf\") " 
pod="openshift-dns/dns-default-5n768" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.845511 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/54cdfa0a-fdb0-4509-9d56-01194a25ee63-metrics-tls\") pod \"ingress-operator-5b745b69d9-ghwq4\" (UID: \"54cdfa0a-fdb0-4509-9d56-01194a25ee63\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ghwq4" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.850633 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8dlbn" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.851686 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5f4f79cd-ada9-4ec7-b779-94d97bdadc97-console-config\") pod \"console-f9d7485db-zlnwh\" (UID: \"5f4f79cd-ada9-4ec7-b779-94d97bdadc97\") " pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.852261 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c15a2653-454b-42e4-85b5-87b99cc30198-srv-cert\") pod \"olm-operator-6b444d44fb-xs7nb\" (UID: \"c15a2653-454b-42e4-85b5-87b99cc30198\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs7nb" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.856285 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f4f79cd-ada9-4ec7-b779-94d97bdadc97-console-serving-cert\") pod \"console-f9d7485db-zlnwh\" (UID: \"5f4f79cd-ada9-4ec7-b779-94d97bdadc97\") " pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.870291 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/downloads-7954f5f757-r4c4b"] Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.870394 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7jnm" event={"ID":"390f4e92-8639-45bb-b91c-a55773bfa293","Type":"ContainerStarted","Data":"9396a3100e1df571d38d3b83628363dae8c0bfbe194d9374bbc454ad754af95b"} Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.871716 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb7ql\" (UniqueName: \"kubernetes.io/projected/ac86b042-947d-402f-a7c8-bb0a69d3f86e-kube-api-access-fb7ql\") pod \"multus-admission-controller-857f4d67dd-cxlfh\" (UID: \"ac86b042-947d-402f-a7c8-bb0a69d3f86e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cxlfh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.887006 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7dh8\" (UniqueName: \"kubernetes.io/projected/2f39cab4-77fc-4641-9e84-c01b0dedc300-kube-api-access-r7dh8\") pod \"console-operator-58897d9998-p5tqf\" (UID: \"2f39cab4-77fc-4641-9e84-c01b0dedc300\") " pod="openshift-console-operator/console-operator-58897d9998-p5tqf" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.891878 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pvqdd"] Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.897499 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-fp4wq" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.900395 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d3de326-8359-4a7c-84da-57a071a929d7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-b2k2f\" (UID: \"5d3de326-8359-4a7c-84da-57a071a929d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2k2f" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.904475 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-7vdck"] Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.915345 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:26 crc kubenswrapper[4824]: E0224 00:09:26.915879 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:27.415865694 +0000 UTC m=+231.405490163 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.924808 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-cxlfh" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.925334 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp68j\" (UniqueName: \"kubernetes.io/projected/44376c4d-d433-41a2-bdc1-22a9792e7640-kube-api-access-lp68j\") pod \"cluster-samples-operator-665b6dd947-tjdhm\" (UID: \"44376c4d-d433-41a2-bdc1-22a9792e7640\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjdhm" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.937583 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w65w\" (UniqueName: \"kubernetes.io/projected/a4a8a6d0-c052-4bca-9be8-dddd6d2ef017-kube-api-access-5w65w\") pod \"catalog-operator-68c6474976-rvv9j\" (UID: \"a4a8a6d0-c052-4bca-9be8-dddd6d2ef017\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rvv9j" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.961873 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdkzl\" (UniqueName: \"kubernetes.io/projected/9016587d-3cd5-46d7-bd50-586cd32933f7-kube-api-access-cdkzl\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:26 crc kubenswrapper[4824]: 
I0224 00:09:26.978786 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9016587d-3cd5-46d7-bd50-586cd32933f7-bound-sa-token\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:26 crc kubenswrapper[4824]: I0224 00:09:26.995216 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r"] Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.017989 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:27 crc kubenswrapper[4824]: E0224 00:09:27.018192 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:27.518136732 +0000 UTC m=+231.507761201 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.019312 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:27 crc kubenswrapper[4824]: E0224 00:09:27.019833 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:27.519818806 +0000 UTC m=+231.509443275 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.021837 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g"] Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.024057 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjdhm" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.030498 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr4rm\" (UniqueName: \"kubernetes.io/projected/54cdfa0a-fdb0-4509-9d56-01194a25ee63-kube-api-access-sr4rm\") pod \"ingress-operator-5b745b69d9-ghwq4\" (UID: \"54cdfa0a-fdb0-4509-9d56-01194a25ee63\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ghwq4" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.035345 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:27 crc kubenswrapper[4824]: W0224 00:09:27.047875 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c4e1d48_7f8d_44b6_97b3_3ceccb35385b.slice/crio-d168e7b48bb45e2f3eeaabaaae34172927392199794b2e25a747dcf303c33d18 WatchSource:0}: Error finding container d168e7b48bb45e2f3eeaabaaae34172927392199794b2e25a747dcf303c33d18: Status 404 returned error can't find the container with id d168e7b48bb45e2f3eeaabaaae34172927392199794b2e25a747dcf303c33d18 Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.049621 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vgr74" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.053437 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f76k\" (UniqueName: \"kubernetes.io/projected/e312a49f-dc7a-49fc-9baf-3105fec587ae-kube-api-access-6f76k\") pod \"marketplace-operator-79b997595-99tkw\" (UID: \"e312a49f-dc7a-49fc-9baf-3105fec587ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.079274 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9glzv\" (UniqueName: \"kubernetes.io/projected/239fc97c-cb5a-4fa1-965e-7b64c90268ce-kube-api-access-9glzv\") pod \"collect-profiles-29531520-xxvzq\" (UID: \"239fc97c-cb5a-4fa1-965e-7b64c90268ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-xxvzq" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.080686 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-p5tqf" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.085366 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvmgf\" (UniqueName: \"kubernetes.io/projected/757dc1d0-9507-4470-8496-9162b8999465-kube-api-access-xvmgf\") pod \"machine-config-server-vvlvv\" (UID: \"757dc1d0-9507-4470-8496-9162b8999465\") " pod="openshift-machine-config-operator/machine-config-server-vvlvv" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.104406 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hl2rv"] Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.104818 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6zzs\" (UniqueName: \"kubernetes.io/projected/67b600b4-d056-4b5f-b75e-0502de432461-kube-api-access-h6zzs\") pod \"service-ca-operator-777779d784-6h296\" (UID: \"67b600b4-d056-4b5f-b75e-0502de432461\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6h296" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.115480 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-kh6hg"] Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.124044 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsdfv\" (UniqueName: \"kubernetes.io/projected/fc6c8d25-266d-4f40-9b7c-e1697b87db51-kube-api-access-gsdfv\") pod \"packageserver-d55dfcdfc-d95c9\" (UID: \"fc6c8d25-266d-4f40-9b7c-e1697b87db51\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.127617 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vvlvv" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.132045 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jf5jw"] Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.135960 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2k2f" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.135991 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:27 crc kubenswrapper[4824]: E0224 00:09:27.136894 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:27.636875131 +0000 UTC m=+231.626499600 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.138090 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29531520-969xh" Feb 24 00:09:27 crc kubenswrapper[4824]: W0224 00:09:27.143198 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53344821_2f26_459a_9e42_003f3f1b5a87.slice/crio-e2e6496dd9b65bd1a5092c51c904c096168802f1d2367b1a5b29b292de68f360 WatchSource:0}: Error finding container e2e6496dd9b65bd1a5092c51c904c096168802f1d2367b1a5b29b292de68f360: Status 404 returned error can't find the container with id e2e6496dd9b65bd1a5092c51c904c096168802f1d2367b1a5b29b292de68f360 Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.146181 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hgp7\" (UniqueName: \"kubernetes.io/projected/cb0a4d10-0131-49ec-97f5-e77f1f222cdd-kube-api-access-2hgp7\") pod \"service-ca-9c57cc56f-8krrp\" (UID: \"cb0a4d10-0131-49ec-97f5-e77f1f222cdd\") " pod="openshift-service-ca/service-ca-9c57cc56f-8krrp" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.165643 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvwdb\" (UniqueName: \"kubernetes.io/projected/a2e0a401-0fd7-499c-ac31-fc8cb0a64366-kube-api-access-hvwdb\") pod \"package-server-manager-789f6589d5-2jqvq\" (UID: \"a2e0a401-0fd7-499c-ac31-fc8cb0a64366\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2jqvq" Feb 24 00:09:27 crc kubenswrapper[4824]: W0224 00:09:27.166562 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f8699c7_58f5_4a80_b5af_5403cb178676.slice/crio-ac2733fb1a358b53d6cecdc04c18db6dd2ffab884268bd9f970b2082f8018667 WatchSource:0}: Error finding container ac2733fb1a358b53d6cecdc04c18db6dd2ffab884268bd9f970b2082f8018667: Status 404 returned error can't find the container with id 
ac2733fb1a358b53d6cecdc04c18db6dd2ffab884268bd9f970b2082f8018667 Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.179148 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb8wj\" (UniqueName: \"kubernetes.io/projected/511cd2d5-0160-44f2-adf1-acbe5c8c28cf-kube-api-access-qb8wj\") pod \"dns-default-5n768\" (UID: \"511cd2d5-0160-44f2-adf1-acbe5c8c28cf\") " pod="openshift-dns/dns-default-5n768" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.185960 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rvv9j" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.203287 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzbm6\" (UniqueName: \"kubernetes.io/projected/782d3fe9-7b5b-4d44-a3e6-0efea9d617ea-kube-api-access-bzbm6\") pod \"ingress-canary-4tvd9\" (UID: \"782d3fe9-7b5b-4d44-a3e6-0efea9d617ea\") " pod="openshift-ingress-canary/ingress-canary-4tvd9" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.225766 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfr7v\" (UniqueName: \"kubernetes.io/projected/ac257861-33c1-4e92-9d58-bb7351f6316e-kube-api-access-gfr7v\") pod \"migrator-59844c95c7-5ltfz\" (UID: \"ac257861-33c1-4e92-9d58-bb7351f6316e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5ltfz" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.237905 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:27 crc kubenswrapper[4824]: E0224 00:09:27.238619 4824 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:27.738573634 +0000 UTC m=+231.728198103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.238823 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdcnv\" (UniqueName: \"kubernetes.io/projected/a7389587-c14d-45bd-b642-4ba3b5d7ac41-kube-api-access-xdcnv\") pod \"kube-storage-version-migrator-operator-b67b599dd-v9bck\" (UID: \"a7389587-c14d-45bd-b642-4ba3b5d7ac41\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9bck" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.259262 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v9d5\" (UniqueName: \"kubernetes.io/projected/4cca2c2a-43ba-4b84-b10b-25053c6d7350-kube-api-access-7v9d5\") pod \"cluster-image-registry-operator-dc59b4c8b-w6pjq\" (UID: \"4cca2c2a-43ba-4b84-b10b-25053c6d7350\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w6pjq" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.259568 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cxlfh"] Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.275451 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9bck" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.275504 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.280750 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ddjs\" (UniqueName: \"kubernetes.io/projected/5f4f79cd-ada9-4ec7-b779-94d97bdadc97-kube-api-access-2ddjs\") pod \"console-f9d7485db-zlnwh\" (UID: \"5f4f79cd-ada9-4ec7-b779-94d97bdadc97\") " pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.284246 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5ltfz" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.299820 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8krrp" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.309689 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jm7qk"] Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.315580 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cca2c2a-43ba-4b84-b10b-25053c6d7350-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-w6pjq\" (UID: \"4cca2c2a-43ba-4b84-b10b-25053c6d7350\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w6pjq" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.315719 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.320017 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q8hvw"] Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.325268 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnxwh\" (UniqueName: \"kubernetes.io/projected/42d75b69-be96-43de-8687-444a81d8ebd5-kube-api-access-bnxwh\") pod \"csi-hostpathplugin-dq9gz\" (UID: \"42d75b69-be96-43de-8687-444a81d8ebd5\") " pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" Feb 24 00:09:27 crc kubenswrapper[4824]: W0224 00:09:27.329063 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac86b042_947d_402f_a7c8_bb0a69d3f86e.slice/crio-3854cfb9a082003d28ae514db1807e177339f5296b54d5724d5dc75eba19a2ff WatchSource:0}: Error finding container 3854cfb9a082003d28ae514db1807e177339f5296b54d5724d5dc75eba19a2ff: Status 404 returned error can't find the container with id 3854cfb9a082003d28ae514db1807e177339f5296b54d5724d5dc75eba19a2ff Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.329685 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx"] Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.334085 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-ksmk7"] Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.336229 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-xxvzq" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.338145 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4thqj\" (UniqueName: \"kubernetes.io/projected/c15a2653-454b-42e4-85b5-87b99cc30198-kube-api-access-4thqj\") pod \"olm-operator-6b444d44fb-xs7nb\" (UID: \"c15a2653-454b-42e4-85b5-87b99cc30198\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs7nb" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.338564 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:27 crc kubenswrapper[4824]: E0224 00:09:27.338923 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:27.838908551 +0000 UTC m=+231.828533020 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.346870 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2jqvq" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.354742 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6h296" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.362039 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3e3160a-60c7-424c-b5a3-53841213467d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c94hb\" (UID: \"c3e3160a-60c7-424c-b5a3-53841213467d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c94hb" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.362357 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.370942 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4tvd9" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.383025 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/54cdfa0a-fdb0-4509-9d56-01194a25ee63-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ghwq4\" (UID: \"54cdfa0a-fdb0-4509-9d56-01194a25ee63\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ghwq4" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.407359 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.413680 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-h7djl"] Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.426494 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8dlbn"] Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.428895 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xfl22"] Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.429059 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5n768" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.432288 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwm9v"] Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.440171 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:27 crc kubenswrapper[4824]: E0224 00:09:27.440610 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:27.940592803 +0000 UTC m=+231.930217272 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:27 crc kubenswrapper[4824]: W0224 00:09:27.453053 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb14c3eec_796c_48b0_b4fe_67cb327f2de7.slice/crio-b75b4015cdaa1d2a42af15d31cc32c5412fd69494b8eaf3585095e889a5412d0 WatchSource:0}: Error finding container b75b4015cdaa1d2a42af15d31cc32c5412fd69494b8eaf3585095e889a5412d0: Status 404 returned error can't find the container with id b75b4015cdaa1d2a42af15d31cc32c5412fd69494b8eaf3585095e889a5412d0 Feb 24 00:09:27 crc kubenswrapper[4824]: W0224 00:09:27.505609 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod836fad19_b7d1_434c_9fd8_faf3eb1d80d1.slice/crio-87e6107c078e7479ad93e38388a25c73c91e9c6c58b31e20dcb1b47002f12310 WatchSource:0}: Error finding container 87e6107c078e7479ad93e38388a25c73c91e9c6c58b31e20dcb1b47002f12310: Status 404 returned error can't find the container with id 87e6107c078e7479ad93e38388a25c73c91e9c6c58b31e20dcb1b47002f12310 Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.541459 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.552430 4824 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w6pjq" Feb 24 00:09:27 crc kubenswrapper[4824]: E0224 00:09:27.554126 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:28.054090385 +0000 UTC m=+232.043714854 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.554452 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs7nb" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.605280 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c94hb" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.619336 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rvv9j"] Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.626837 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ghwq4" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.643713 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:27 crc kubenswrapper[4824]: E0224 00:09:27.644388 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:28.144370299 +0000 UTC m=+232.133994768 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.650705 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjdhm"] Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.690776 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-p5tqf"] Feb 24 00:09:27 crc kubenswrapper[4824]: W0224 00:09:27.712421 4824 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4a8a6d0_c052_4bca_9be8_dddd6d2ef017.slice/crio-69b8b9e353f95a2492ada4ea39468dc13589ceb0616170628777a130c44de6c3 WatchSource:0}: Error finding container 69b8b9e353f95a2492ada4ea39468dc13589ceb0616170628777a130c44de6c3: Status 404 returned error can't find the container with id 69b8b9e353f95a2492ada4ea39468dc13589ceb0616170628777a130c44de6c3 Feb 24 00:09:27 crc kubenswrapper[4824]: W0224 00:09:27.744023 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f39cab4_77fc_4641_9e84_c01b0dedc300.slice/crio-f2b04c6997b421395ed405ee1136616fafacd3f6f41750220fb508cbe7531926 WatchSource:0}: Error finding container f2b04c6997b421395ed405ee1136616fafacd3f6f41750220fb508cbe7531926: Status 404 returned error can't find the container with id f2b04c6997b421395ed405ee1136616fafacd3f6f41750220fb508cbe7531926 Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.747054 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:27 crc kubenswrapper[4824]: E0224 00:09:27.747823 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:28.247793396 +0000 UTC m=+232.237417865 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.765075 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29531520-969xh"] Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.849356 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:27 crc kubenswrapper[4824]: E0224 00:09:27.849853 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:28.349836368 +0000 UTC m=+232.339460827 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.868347 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2k2f"] Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.885017 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cxlfh" event={"ID":"ac86b042-947d-402f-a7c8-bb0a69d3f86e","Type":"ContainerStarted","Data":"3854cfb9a082003d28ae514db1807e177339f5296b54d5724d5dc75eba19a2ff"} Feb 24 00:09:27 crc kubenswrapper[4824]: W0224 00:09:27.885362 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf09bc4be_bc94_4c63_93ec_4bc2fef07d1b.slice/crio-4fbc9ac5dc3c69b5711041434cb98b9bdb56115d74b5092502a2732ff4babe43 WatchSource:0}: Error finding container 4fbc9ac5dc3c69b5711041434cb98b9bdb56115d74b5092502a2732ff4babe43: Status 404 returned error can't find the container with id 4fbc9ac5dc3c69b5711041434cb98b9bdb56115d74b5092502a2732ff4babe43 Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.886134 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rvv9j" event={"ID":"a4a8a6d0-c052-4bca-9be8-dddd6d2ef017","Type":"ContainerStarted","Data":"69b8b9e353f95a2492ada4ea39468dc13589ceb0616170628777a130c44de6c3"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.888246 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-58897d9998-p5tqf" event={"ID":"2f39cab4-77fc-4641-9e84-c01b0dedc300","Type":"ContainerStarted","Data":"f2b04c6997b421395ed405ee1136616fafacd3f6f41750220fb508cbe7531926"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.895330 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7jnm" event={"ID":"390f4e92-8639-45bb-b91c-a55773bfa293","Type":"ContainerStarted","Data":"82dc0e47877b1314fb573f81841d1421f5bbdd1583f5cff3bce8a90521be9640"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.901345 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" event={"ID":"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115","Type":"ContainerStarted","Data":"3b88c71c7f646381790daa9790f722b3637990f46735a62cbc3312b308a3ab9b"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.904380 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g" event={"ID":"01ed973e-7ed7-41ec-bea9-69d8c86e19ed","Type":"ContainerStarted","Data":"6a84a663d942eb0b5a72d9e552d94ef02c3769dcfcad6ef19a67b74eca023607"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.911880 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q8hvw" event={"ID":"13bff804-f118-473b-a547-433aed671b46","Type":"ContainerStarted","Data":"f3f6deb1282d355ebf00359716237c84d49ff6c7dfd3fbfcf4ed7963a90f8deb"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.924204 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-r4c4b" event={"ID":"581e69ae-c21a-4a9e-b1ea-9c38256d7b30","Type":"ContainerStarted","Data":"deb3616bdfcc08678302c0e0617b53f7bdd5f57fee7e5facc2929f3b91c7322b"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.924264 4824 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-r4c4b" event={"ID":"581e69ae-c21a-4a9e-b1ea-9c38256d7b30","Type":"ContainerStarted","Data":"9b97a0a4486ae2cc63c665caa223ddb2e50e244e1359fe03d78a031900e54400"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.924769 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-r4c4b" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.926464 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xfl22" event={"ID":"f66ddecd-538b-48bd-a335-e7f99181daa0","Type":"ContainerStarted","Data":"79cda5a3afc87515700b0f0131cdecfbd8ca4df84513146d57fe549bf0db6cd6"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.928128 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd" event={"ID":"d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110","Type":"ContainerStarted","Data":"530327b92b782379bd097164c03bb9229bcf211e2c8a3fb1f8745b4be7f7b5ad"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.929765 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r" event={"ID":"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b","Type":"ContainerStarted","Data":"d168e7b48bb45e2f3eeaabaaae34172927392199794b2e25a747dcf303c33d18"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.930927 4824 patch_prober.go:28] interesting pod/downloads-7954f5f757-r4c4b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.931103 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-r4c4b" podUID="581e69ae-c21a-4a9e-b1ea-9c38256d7b30" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.933593 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ksmk7" event={"ID":"b14c3eec-796c-48b0-b4fe-67cb327f2de7","Type":"ContainerStarted","Data":"b75b4015cdaa1d2a42af15d31cc32c5412fd69494b8eaf3585095e889a5412d0"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.939534 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7vdck" event={"ID":"2595ad7b-d8cc-46d2-b0db-60e7ce636e9f","Type":"ContainerStarted","Data":"bef25273eb059dd1ab72abd3983263df0ab7e759d96ada14df8ba73000a1f593"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.939585 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-7vdck" event={"ID":"2595ad7b-d8cc-46d2-b0db-60e7ce636e9f","Type":"ContainerStarted","Data":"8932887d5d14bb497fe31bc893f05b1b68140314607684cd919a64d5341e6082"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.947897 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8dlbn" event={"ID":"bb795a58-0029-416c-84fa-ae83cf338858","Type":"ContainerStarted","Data":"9b6f6c895fff8aaaed37faec3b49457fd54c815cfdb0c7145450f3fd7a32f97d"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.949741 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" event={"ID":"836fad19-b7d1-434c-9fd8-faf3eb1d80d1","Type":"ContainerStarted","Data":"87e6107c078e7479ad93e38388a25c73c91e9c6c58b31e20dcb1b47002f12310"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.950248 4824 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:27 crc kubenswrapper[4824]: E0224 00:09:27.950736 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:28.4507151 +0000 UTC m=+232.440339569 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.951461 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-h7djl" event={"ID":"8118fe3c-1479-4634-9b64-9350991d909d","Type":"ContainerStarted","Data":"6c28fd392465c1155e5f0751f4879dd6b35f5e6f98909953be239d553f455816"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.952081 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hl2rv" event={"ID":"33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4","Type":"ContainerStarted","Data":"98f7f928cc84639f65f90fdefcf2fd2fbae904cbd165407fbf7d9753ff24601b"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.953057 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" event={"ID":"6f8699c7-58f5-4a80-b5af-5403cb178676","Type":"ContainerStarted","Data":"ac2733fb1a358b53d6cecdc04c18db6dd2ffab884268bd9f970b2082f8018667"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.953821 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vgr74" event={"ID":"4349e271-7758-4dcf-9053-fbc984436a8b","Type":"ContainerStarted","Data":"1a1314c923a76d17830d326aab8824c565167be258b1dd220e62acb8d1e4eb68"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.954439 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwm9v" event={"ID":"250422bb-6e8f-4622-a456-ded5825e7c86","Type":"ContainerStarted","Data":"621b31c4a6ab74d4e0dd43cf066f7ee99350da0fe63058e5278375c53595a9f3"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.958317 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vvlvv" event={"ID":"757dc1d0-9507-4470-8496-9162b8999465","Type":"ContainerStarted","Data":"9cd2b91af806441c9262996df56b5b14e5883d9bdae0028c3be7a8f609baf215"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.959905 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kh6hg" event={"ID":"53344821-2f26-459a-9e42-003f3f1b5a87","Type":"ContainerStarted","Data":"e2e6496dd9b65bd1a5092c51c904c096168802f1d2367b1a5b29b292de68f360"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.963252 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-fp4wq" event={"ID":"5b0ff99f-1e04-4e23-895a-a02a303c8daa","Type":"ContainerStarted","Data":"8a04e1120c09a51d0bfbf418056c056935cda72eb55f40882a439cb22de54ad2"} Feb 24 00:09:27 crc kubenswrapper[4824]: I0224 00:09:27.963319 4824 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-fp4wq" event={"ID":"5b0ff99f-1e04-4e23-895a-a02a303c8daa","Type":"ContainerStarted","Data":"bbdcb3c8f2541317f6a10dae8c71fbfc2f6357afb4cd3a5b35094b0739aa7bdb"} Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.025674 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9"] Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.055971 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:28 crc kubenswrapper[4824]: E0224 00:09:28.056343 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:28.556327865 +0000 UTC m=+232.545952334 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.087803 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2jqvq"] Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.128383 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531520-xxvzq"] Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.155251 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6h296"] Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.157527 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:28 crc kubenswrapper[4824]: E0224 00:09:28.158095 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:28.658072789 +0000 UTC m=+232.647697258 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.181593 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5n768"] Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.216926 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c94hb"] Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.260273 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:28 crc kubenswrapper[4824]: E0224 00:09:28.260748 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:28.760728326 +0000 UTC m=+232.750352795 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.370365 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:28 crc kubenswrapper[4824]: E0224 00:09:28.372994 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:28.872971695 +0000 UTC m=+232.862596164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.397713 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5ltfz"] Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.424062 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9bck"] Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.462014 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8krrp"] Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.474797 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:28 crc kubenswrapper[4824]: E0224 00:09:28.475252 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:28.975236163 +0000 UTC m=+232.964860632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.576170 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:28 crc kubenswrapper[4824]: E0224 00:09:28.576645 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:29.076620587 +0000 UTC m=+233.066245056 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:28 crc kubenswrapper[4824]: W0224 00:09:28.652842 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc6c8d25_266d_4f40_9b7c_e1697b87db51.slice/crio-cdcdb1571f7e1faf1e4a1c617eb8f73363fd3c5be2798a32b2b3ec63648683db WatchSource:0}: Error finding container cdcdb1571f7e1faf1e4a1c617eb8f73363fd3c5be2798a32b2b3ec63648683db: Status 404 returned error can't find the container with id cdcdb1571f7e1faf1e4a1c617eb8f73363fd3c5be2798a32b2b3ec63648683db Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.680563 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:28 crc kubenswrapper[4824]: E0224 00:09:28.681315 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:29.181272877 +0000 UTC m=+233.170897526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:28 crc kubenswrapper[4824]: W0224 00:09:28.725063 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67b600b4_d056_4b5f_b75e_0502de432461.slice/crio-603f42335849da53e1c1b5dfc8f5b1ff9f6569fbac08f8c6d978eaa1f0ede865 WatchSource:0}: Error finding container 603f42335849da53e1c1b5dfc8f5b1ff9f6569fbac08f8c6d978eaa1f0ede865: Status 404 returned error can't find the container with id 603f42335849da53e1c1b5dfc8f5b1ff9f6569fbac08f8c6d978eaa1f0ede865 Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.728766 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-99tkw"] Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.783164 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:28 crc kubenswrapper[4824]: E0224 00:09:28.783962 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:29.283938645 +0000 UTC m=+233.273563114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.893151 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:28 crc kubenswrapper[4824]: E0224 00:09:28.895932 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:29.393507054 +0000 UTC m=+233.383131523 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.898365 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-fp4wq" Feb 24 00:09:28 crc kubenswrapper[4824]: W0224 00:09:28.905238 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb0a4d10_0131_49ec_97f5_e77f1f222cdd.slice/crio-24f1d15c6dfa786e261f3dfad2f6f8184b4edad621931cf5ea95c0b98429c259 WatchSource:0}: Error finding container 24f1d15c6dfa786e261f3dfad2f6f8184b4edad621931cf5ea95c0b98429c259: Status 404 returned error can't find the container with id 24f1d15c6dfa786e261f3dfad2f6f8184b4edad621931cf5ea95c0b98429c259 Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.905575 4824 patch_prober.go:28] interesting pod/router-default-5444994796-fp4wq container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.905621 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fp4wq" podUID="5b0ff99f-1e04-4e23-895a-a02a303c8daa" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 24 00:09:28 crc kubenswrapper[4824]: W0224 00:09:28.955140 4824 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac257861_33c1_4e92_9d58_bb7351f6316e.slice/crio-576206ef4cd543d1284208fe093beda1ae91b3259cc7628242b381f11e37105c WatchSource:0}: Error finding container 576206ef4cd543d1284208fe093beda1ae91b3259cc7628242b381f11e37105c: Status 404 returned error can't find the container with id 576206ef4cd543d1284208fe093beda1ae91b3259cc7628242b381f11e37105c Feb 24 00:09:28 crc kubenswrapper[4824]: W0224 00:09:28.972554 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode312a49f_dc7a_49fc_9baf_3105fec587ae.slice/crio-1e7b695fbb51788dd119d9e0ae76024be2038ddb563a00cf87c9d5c4544df61f WatchSource:0}: Error finding container 1e7b695fbb51788dd119d9e0ae76024be2038ddb563a00cf87c9d5c4544df61f: Status 404 returned error can't find the container with id 1e7b695fbb51788dd119d9e0ae76024be2038ddb563a00cf87c9d5c4544df61f Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.977466 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5n768" event={"ID":"511cd2d5-0160-44f2-adf1-acbe5c8c28cf","Type":"ContainerStarted","Data":"e43f346005a2d3ccf7b602123427aa11d0ed8ad48abf7d286256ba72ef490e6f"} Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.982573 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2k2f" event={"ID":"5d3de326-8359-4a7c-84da-57a071a929d7","Type":"ContainerStarted","Data":"de1cdc762d748da1200dd4014fc8eb8bd10989af0ebeebbb6cf5cdb6d9e440ce"} Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.986201 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" event={"ID":"6f8699c7-58f5-4a80-b5af-5403cb178676","Type":"ContainerStarted","Data":"fc75bfe4b562302aad30993aa2a68489589d802790238c3eeef171430ffcd747"} Feb 24 00:09:28 crc 
kubenswrapper[4824]: I0224 00:09:28.987239 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.992814 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c94hb" event={"ID":"c3e3160a-60c7-424c-b5a3-53841213467d","Type":"ContainerStarted","Data":"a573ee6ab5f7e155f1846057ab13689cdd9186f5eda034467c0aadf412cd950e"} Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.993296 4824 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-jf5jw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.993345 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" podUID="6f8699c7-58f5-4a80-b5af-5403cb178676" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.993736 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:28 crc kubenswrapper[4824]: E0224 00:09:28.994057 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 00:09:29.494037936 +0000 UTC m=+233.483662405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.994202 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:28 crc kubenswrapper[4824]: E0224 00:09:28.994549 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:29.494540359 +0000 UTC m=+233.484164828 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:28 crc kubenswrapper[4824]: I0224 00:09:28.995841 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd" event={"ID":"d0c9bb52-a40f-4e1d-ba3d-fbb14eb74110","Type":"ContainerStarted","Data":"1a255e62aab2f5564fe5ff78c6845daa8f4591ba62c038ccf2871069a11dc2b4"} Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.000654 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-xxvzq" event={"ID":"239fc97c-cb5a-4fa1-965e-7b64c90268ce","Type":"ContainerStarted","Data":"4d0da2c3da00c6dd6cf100ba43dd4048f42c65ed90df04df6d04e96db17f2c53"} Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.004617 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hl2rv" event={"ID":"33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4","Type":"ContainerStarted","Data":"69873e9d9e5bc06a933ebde7f766baf24b81f7e550ee3aca2edb4f71f6eb32ac"} Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.004677 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29531520-969xh" event={"ID":"f09bc4be-bc94-4c63-93ec-4bc2fef07d1b","Type":"ContainerStarted","Data":"4fbc9ac5dc3c69b5711041434cb98b9bdb56115d74b5092502a2732ff4babe43"} Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.006397 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r" event={"ID":"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b","Type":"ContainerStarted","Data":"b81e27c4f382da70441b9aeabc639347240cb4f01fe8b81ba75c60688c944968"} Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.006934 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r" Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.017647 4824 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-9k27r container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.017687 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r" podUID="1c4e1d48-7f8d-44b6-97b3-3ceccb35385b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.057605 4824 generic.go:334] "Generic (PLEG): container finished" podID="01ed973e-7ed7-41ec-bea9-69d8c86e19ed" containerID="91ed045e95f66b7afb1a6cf84984b9da73731bffb60c54002e215cbe84b34c12" exitCode=0 Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.057783 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g" event={"ID":"01ed973e-7ed7-41ec-bea9-69d8c86e19ed","Type":"ContainerDied","Data":"91ed045e95f66b7afb1a6cf84984b9da73731bffb60c54002e215cbe84b34c12"} Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.095984 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:29 crc kubenswrapper[4824]: E0224 00:09:29.100733 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:29.600683828 +0000 UTC m=+233.590308297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.109778 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjdhm" event={"ID":"44376c4d-d433-41a2-bdc1-22a9792e7640","Type":"ContainerStarted","Data":"b26d2427ecad64c84ce29d1b0b8b771dd0b20b42de4768633a17bcd1d64a93f9"} Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.120244 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-kh6hg" event={"ID":"53344821-2f26-459a-9e42-003f3f1b5a87","Type":"ContainerStarted","Data":"416937b951cda54739a4849ddf48f54f63846aeb5fcdf83281d98f16d49cb1a5"} Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.125137 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9bck" event={"ID":"a7389587-c14d-45bd-b642-4ba3b5d7ac41","Type":"ContainerStarted","Data":"90172e64e033d046b55bfd59bd0d4866270173465c33f5995b3a5117cdf52d53"} Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.127966 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8krrp" event={"ID":"cb0a4d10-0131-49ec-97f5-e77f1f222cdd","Type":"ContainerStarted","Data":"24f1d15c6dfa786e261f3dfad2f6f8184b4edad621931cf5ea95c0b98429c259"} Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.138319 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9" event={"ID":"fc6c8d25-266d-4f40-9b7c-e1697b87db51","Type":"ContainerStarted","Data":"cdcdb1571f7e1faf1e4a1c617eb8f73363fd3c5be2798a32b2b3ec63648683db"} Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.141931 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6h296" event={"ID":"67b600b4-d056-4b5f-b75e-0502de432461","Type":"ContainerStarted","Data":"603f42335849da53e1c1b5dfc8f5b1ff9f6569fbac08f8c6d978eaa1f0ede865"} Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.145853 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2jqvq" event={"ID":"a2e0a401-0fd7-499c-ac31-fc8cb0a64366","Type":"ContainerStarted","Data":"5d56999380ffc886b9f74be0d31ebfa6f8bff94423f32fdfacad2a5b80cac8ac"} Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.147720 4824 patch_prober.go:28] interesting pod/downloads-7954f5f757-r4c4b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 24 00:09:29 crc kubenswrapper[4824]: 
I0224 00:09:29.147796 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-r4c4b" podUID="581e69ae-c21a-4a9e-b1ea-9c38256d7b30" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.151472 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dq9gz"] Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.189767 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-r4c4b" podStartSLOduration=159.18974776 podStartE2EDuration="2m39.18974776s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:29.181549215 +0000 UTC m=+233.171173694" watchObservedRunningTime="2026-02-24 00:09:29.18974776 +0000 UTC m=+233.179372229" Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.204706 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:29 crc kubenswrapper[4824]: E0224 00:09:29.207047 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:29.707032383 +0000 UTC m=+233.696656852 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.261771 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m7jnm" podStartSLOduration=160.261745775 podStartE2EDuration="2m40.261745775s" podCreationTimestamp="2026-02-24 00:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:29.254333351 +0000 UTC m=+233.243957820" watchObservedRunningTime="2026-02-24 00:09:29.261745775 +0000 UTC m=+233.251370244" Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.294169 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs7nb"] Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.297457 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-fp4wq" podStartSLOduration=159.297416729 podStartE2EDuration="2m39.297416729s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:29.295396636 +0000 UTC m=+233.285021105" watchObservedRunningTime="2026-02-24 00:09:29.297416729 +0000 UTC m=+233.287041198" Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.307287 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:29 crc kubenswrapper[4824]: E0224 00:09:29.308969 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:29.808946491 +0000 UTC m=+233.798570960 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.325738 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ghwq4"] Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.332305 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zlnwh"] Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.381216 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w6pjq"] Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.394663 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" podStartSLOduration=160.394635634 podStartE2EDuration="2m40.394635634s" podCreationTimestamp="2026-02-24 00:06:49 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:29.3471209 +0000 UTC m=+233.336745369" watchObservedRunningTime="2026-02-24 00:09:29.394635634 +0000 UTC m=+233.384260103" Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.398279 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-pvqdd" podStartSLOduration=159.398251119 podStartE2EDuration="2m39.398251119s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:29.372647819 +0000 UTC m=+233.362272298" watchObservedRunningTime="2026-02-24 00:09:29.398251119 +0000 UTC m=+233.387875588" Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.398713 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4tvd9"] Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.410559 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:29 crc kubenswrapper[4824]: E0224 00:09:29.411069 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:29.911054684 +0000 UTC m=+233.900679153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.460507 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r" podStartSLOduration=159.460487359 podStartE2EDuration="2m39.460487359s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:29.456051933 +0000 UTC m=+233.445676402" watchObservedRunningTime="2026-02-24 00:09:29.460487359 +0000 UTC m=+233.450111828" Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.463706 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-7vdck" podStartSLOduration=160.463697723 podStartE2EDuration="2m40.463697723s" podCreationTimestamp="2026-02-24 00:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:29.421691163 +0000 UTC m=+233.411315652" watchObservedRunningTime="2026-02-24 00:09:29.463697723 +0000 UTC m=+233.453322202" Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.511380 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:29 crc kubenswrapper[4824]: E0224 00:09:29.512072 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:30.012026298 +0000 UTC m=+234.001650767 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.613947 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:29 crc kubenswrapper[4824]: E0224 00:09:29.614414 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:30.114391748 +0000 UTC m=+234.104016387 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.715146 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:29 crc kubenswrapper[4824]: E0224 00:09:29.716047 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:30.21603011 +0000 UTC m=+234.205654579 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.818351 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:29 crc kubenswrapper[4824]: E0224 00:09:29.818815 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:30.31880204 +0000 UTC m=+234.308426509 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.919982 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:29 crc kubenswrapper[4824]: E0224 00:09:29.920035 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:30.42000509 +0000 UTC m=+234.409629559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.920599 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:29 crc kubenswrapper[4824]: E0224 00:09:29.921001 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:30.420985996 +0000 UTC m=+234.410610465 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.937755 4824 patch_prober.go:28] interesting pod/router-default-5444994796-fp4wq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:09:29 crc kubenswrapper[4824]: [-]has-synced failed: reason withheld Feb 24 00:09:29 crc kubenswrapper[4824]: [+]process-running ok Feb 24 00:09:29 crc kubenswrapper[4824]: healthz check failed Feb 24 00:09:29 crc kubenswrapper[4824]: I0224 00:09:29.937866 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fp4wq" podUID="5b0ff99f-1e04-4e23-895a-a02a303c8daa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.021735 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:30 crc kubenswrapper[4824]: E0224 00:09:30.023072 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 00:09:30.523043348 +0000 UTC m=+234.512667817 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.128944 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:30 crc kubenswrapper[4824]: E0224 00:09:30.129985 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:30.629609998 +0000 UTC m=+234.619234467 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.230211 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:30 crc kubenswrapper[4824]: E0224 00:09:30.230684 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:30.730650814 +0000 UTC m=+234.720275283 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.230872 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:30 crc kubenswrapper[4824]: E0224 00:09:30.231255 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:30.731246539 +0000 UTC m=+234.720871208 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.234493 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjdhm" event={"ID":"44376c4d-d433-41a2-bdc1-22a9792e7640","Type":"ContainerStarted","Data":"99abc91f2913a10fef9474c1671213d7ae90cee98647992dc4c213a7e1946ca3"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.238314 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2jqvq" event={"ID":"a2e0a401-0fd7-499c-ac31-fc8cb0a64366","Type":"ContainerStarted","Data":"9f135bd87acd59182b077577cc14922f88a1e7d927ad4c74ec069a5ec970efae"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.239322 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w6pjq" event={"ID":"4cca2c2a-43ba-4b84-b10b-25053c6d7350","Type":"ContainerStarted","Data":"f72ae1a2e885ada97661c5a55ce0b906a0a7cc454e49a0034e9a4aa71134c651"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.240430 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwm9v" event={"ID":"250422bb-6e8f-4622-a456-ded5825e7c86","Type":"ContainerStarted","Data":"0d4991af3b6e9acf7e6d057cb38170c4c4c5e2bc88353af45ee41de342133baf"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.242804 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rvv9j" event={"ID":"a4a8a6d0-c052-4bca-9be8-dddd6d2ef017","Type":"ContainerStarted","Data":"aec93aaf970240a924b8e66b02bbe366e1193ded0ca6679ff5d203cf347100ea"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.243282 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rvv9j" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.246294 4824 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-rvv9j container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.246370 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rvv9j" podUID="a4a8a6d0-c052-4bca-9be8-dddd6d2ef017" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.246715 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-p5tqf" event={"ID":"2f39cab4-77fc-4641-9e84-c01b0dedc300","Type":"ContainerStarted","Data":"2f01bda5b5764a5debd700f27975f9deec4ae0d662331f19e9e91caf8f764f9d"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.247083 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-p5tqf" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.248501 4824 patch_prober.go:28] interesting pod/console-operator-58897d9998-p5tqf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get 
\"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.248564 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-p5tqf" podUID="2f39cab4-77fc-4641-9e84-c01b0dedc300" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.249615 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-h7djl" event={"ID":"8118fe3c-1479-4634-9b64-9350991d909d","Type":"ContainerStarted","Data":"c96a3b6f872d0766c2756665e0f66c8eac91b9123623bf1c47ba4d185db1b859"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.253538 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q8hvw" event={"ID":"13bff804-f118-473b-a547-433aed671b46","Type":"ContainerStarted","Data":"30a0cc68b0a9066eb90264fe3ae3b8d4863e3b99d2e08267535de907d2363859"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.261950 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8dlbn" event={"ID":"bb795a58-0029-416c-84fa-ae83cf338858","Type":"ContainerStarted","Data":"0aa26fc0f3a9ea0792414af9e37e6f9705ebdd80ae567d265f309d783d7efa2a"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.265971 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-xxvzq" event={"ID":"239fc97c-cb5a-4fa1-965e-7b64c90268ce","Type":"ContainerStarted","Data":"594ea7953af708dc6eec520d0cd46b08f1c6126425d4ad263d064dfe050100f2"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.272081 4824 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lwm9v" podStartSLOduration=160.272060988 podStartE2EDuration="2m40.272060988s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:30.271890994 +0000 UTC m=+234.261515483" watchObservedRunningTime="2026-02-24 00:09:30.272060988 +0000 UTC m=+234.261685457" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.292645 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" event={"ID":"e312a49f-dc7a-49fc-9baf-3105fec587ae","Type":"ContainerStarted","Data":"1f7f84523e39d2e74db2895c5b1819295512a987f6083e74a45c4c25f78e706d"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.292955 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" event={"ID":"e312a49f-dc7a-49fc-9baf-3105fec587ae","Type":"ContainerStarted","Data":"1e7b695fbb51788dd119d9e0ae76024be2038ddb563a00cf87c9d5c4544df61f"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.293275 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.296340 4824 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-99tkw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.296389 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" podUID="e312a49f-dc7a-49fc-9baf-3105fec587ae" 
containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.302865 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" event={"ID":"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115","Type":"ContainerStarted","Data":"5b26b0b907a714e0eb8fa1b65c28de4396d3dfbda124fc0f95f4d779730bf39c"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.304117 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.306144 4824 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-jm7qk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.306501 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" podUID="49f90ffa-baa8-4d9b-bd17-3a12f9c6f115" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.308728 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-p5tqf" podStartSLOduration=160.308718298 podStartE2EDuration="2m40.308718298s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:30.289752501 +0000 UTC m=+234.279376990" 
watchObservedRunningTime="2026-02-24 00:09:30.308718298 +0000 UTC m=+234.298342757" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.311253 4824 generic.go:334] "Generic (PLEG): container finished" podID="f66ddecd-538b-48bd-a335-e7f99181daa0" containerID="f0f0a604c0fb7850469e77058191d27cb2fd20f4ef1e3681ea96e8b7c8d50a7f" exitCode=0 Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.311984 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xfl22" event={"ID":"f66ddecd-538b-48bd-a335-e7f99181daa0","Type":"ContainerDied","Data":"f0f0a604c0fb7850469e77058191d27cb2fd20f4ef1e3681ea96e8b7c8d50a7f"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.325201 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cxlfh" event={"ID":"ac86b042-947d-402f-a7c8-bb0a69d3f86e","Type":"ContainerStarted","Data":"ccb595331655ddf41e2d0ab8c44143a7e4f210527b1e8e9b9ea1629f52f44c0a"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.327968 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9" event={"ID":"fc6c8d25-266d-4f40-9b7c-e1697b87db51","Type":"ContainerStarted","Data":"56f82db8dcee9fbb39506708eee0556997af7585f1ce2e526ee4589a83c6d9d1"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.330605 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2k2f" event={"ID":"5d3de326-8359-4a7c-84da-57a071a929d7","Type":"ContainerStarted","Data":"c881de585367ac2c53d1b5535bb3d04d188de0a29617c85c95b37cd9b4aeec95"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.331315 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:30 crc kubenswrapper[4824]: E0224 00:09:30.334364 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:30.834337369 +0000 UTC m=+234.823962028 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.347266 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8dlbn" podStartSLOduration=160.347229736 podStartE2EDuration="2m40.347229736s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:30.309634472 +0000 UTC m=+234.299258951" watchObservedRunningTime="2026-02-24 00:09:30.347229736 +0000 UTC m=+234.336854215" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.355848 4824 generic.go:334] "Generic (PLEG): container finished" podID="836fad19-b7d1-434c-9fd8-faf3eb1d80d1" containerID="33e1400531edf7e2332b0284ef1eea808c68ad473f62614215212f0f5dbd1985" exitCode=0 Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.355936 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" 
event={"ID":"836fad19-b7d1-434c-9fd8-faf3eb1d80d1","Type":"ContainerDied","Data":"33e1400531edf7e2332b0284ef1eea808c68ad473f62614215212f0f5dbd1985"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.363227 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vvlvv" event={"ID":"757dc1d0-9507-4470-8496-9162b8999465","Type":"ContainerStarted","Data":"b83094e79c3ff6fccf0589bfe884a052268f987cc9a433c1d16cc35c9027f576"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.372638 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rvv9j" podStartSLOduration=160.372612511 podStartE2EDuration="2m40.372612511s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:30.360964236 +0000 UTC m=+234.350588705" watchObservedRunningTime="2026-02-24 00:09:30.372612511 +0000 UTC m=+234.362236990" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.382421 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-xxvzq" podStartSLOduration=161.382403047 podStartE2EDuration="2m41.382403047s" podCreationTimestamp="2026-02-24 00:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:30.336011062 +0000 UTC m=+234.325635561" watchObservedRunningTime="2026-02-24 00:09:30.382403047 +0000 UTC m=+234.372027526" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.391596 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9bck" 
event={"ID":"a7389587-c14d-45bd-b642-4ba3b5d7ac41","Type":"ContainerStarted","Data":"a614ea537267f662a11e1f4e8d0541e96b3d72d0802ce9a2d96ff8aeaaca8b4d"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.398499 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-q8hvw" podStartSLOduration=160.398480048 podStartE2EDuration="2m40.398480048s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:30.396422404 +0000 UTC m=+234.386046883" watchObservedRunningTime="2026-02-24 00:09:30.398480048 +0000 UTC m=+234.388104517" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.420758 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" podStartSLOduration=160.420732581 podStartE2EDuration="2m40.420732581s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:30.42030612 +0000 UTC m=+234.409930579" watchObservedRunningTime="2026-02-24 00:09:30.420732581 +0000 UTC m=+234.410357050" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.429240 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8krrp" event={"ID":"cb0a4d10-0131-49ec-97f5-e77f1f222cdd","Type":"ContainerStarted","Data":"488579b043a6667caf4583cad8f042d6fa78b1ae65d09ae9fa6aecb15bb321b8"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.442972 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.443979 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ksmk7" event={"ID":"b14c3eec-796c-48b0-b4fe-67cb327f2de7","Type":"ContainerStarted","Data":"278ff91e5756c4337eeb078485e2880497cc0a4aec17577a57629a7131766cce"} Feb 24 00:09:30 crc kubenswrapper[4824]: E0224 00:09:30.445337 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:30.945321275 +0000 UTC m=+234.934945744 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.469230 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4tvd9" event={"ID":"782d3fe9-7b5b-4d44-a3e6-0efea9d617ea","Type":"ContainerStarted","Data":"6dfc0c52c93437840e6a1b1efb9329928f54cfd73337fbdd42992283f6ab0e30"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.485914 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-v9bck" podStartSLOduration=160.485888357 podStartE2EDuration="2m40.485888357s" podCreationTimestamp="2026-02-24 
00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:30.479386196 +0000 UTC m=+234.469010685" watchObservedRunningTime="2026-02-24 00:09:30.485888357 +0000 UTC m=+234.475512826" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.504062 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5ltfz" event={"ID":"ac257861-33c1-4e92-9d58-bb7351f6316e","Type":"ContainerStarted","Data":"576206ef4cd543d1284208fe093beda1ae91b3259cc7628242b381f11e37105c"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.511749 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ghwq4" event={"ID":"54cdfa0a-fdb0-4509-9d56-01194a25ee63","Type":"ContainerStarted","Data":"40adc91feaed59d35fce2485124eb78147af78e7e249f09a95645b0bcff39f23"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.517613 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29531520-969xh" event={"ID":"f09bc4be-bc94-4c63-93ec-4bc2fef07d1b","Type":"ContainerStarted","Data":"6b2ac39d85326d80c4e57096bd6873f9064eac38a27f5eddf04bd260901e4edf"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.521839 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zlnwh" event={"ID":"5f4f79cd-ada9-4ec7-b779-94d97bdadc97","Type":"ContainerStarted","Data":"11795379a86e024e8f5b6ad9e9cbdb8472c4d324b7b1c3b1671d4ddf8d54e1cb"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.523364 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" event={"ID":"42d75b69-be96-43de-8687-444a81d8ebd5","Type":"ContainerStarted","Data":"682e056d710c3e51c0158c29dc5d66e10c283291a81afb4e4ab4cc742bfec0fc"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.528557 
4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hl2rv" event={"ID":"33028c7d-6bd8-46bb-a7d9-e1c46b0a26e4","Type":"ContainerStarted","Data":"a36a25639fb2445cd75d2d2db55ddd1407eaca6928184e9027fcf8f590aa73df"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.530017 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs7nb" event={"ID":"c15a2653-454b-42e4-85b5-87b99cc30198","Type":"ContainerStarted","Data":"52a7ea0db83a5faab3d8ee5297e5291b0a41c601ee08c66bc2f4dddc021db8c1"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.539636 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vgr74" event={"ID":"4349e271-7758-4dcf-9053-fbc984436a8b","Type":"ContainerStarted","Data":"1cb734d300d3a94c1fb6999fd12350fe2c76721ac4830bd6dc509bab9b28b256"} Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.540404 4824 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-jf5jw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.540478 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" podUID="6f8699c7-58f5-4a80-b5af-5403cb178676" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.541669 4824 patch_prober.go:28] interesting pod/downloads-7954f5f757-r4c4b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 
10.217.0.11:8080: connect: connection refused" start-of-body= Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.542074 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-r4c4b" podUID="581e69ae-c21a-4a9e-b1ea-9c38256d7b30" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.544543 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:30 crc kubenswrapper[4824]: E0224 00:09:30.544892 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:31.044867591 +0000 UTC m=+235.034492060 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.545845 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:30 crc kubenswrapper[4824]: E0224 00:09:30.548192 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:31.048164807 +0000 UTC m=+235.037789276 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.550336 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b2k2f" podStartSLOduration=160.550303573 podStartE2EDuration="2m40.550303573s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:30.538986627 +0000 UTC m=+234.528611106" watchObservedRunningTime="2026-02-24 00:09:30.550303573 +0000 UTC m=+234.539928042" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.552391 4824 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-9k27r container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.552466 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r" podUID="1c4e1d48-7f8d-44b6-97b3-3ceccb35385b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.601906 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" podStartSLOduration=160.601885344 podStartE2EDuration="2m40.601885344s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:30.561749383 +0000 UTC m=+234.551373852" watchObservedRunningTime="2026-02-24 00:09:30.601885344 +0000 UTC m=+234.591509813" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.605166 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-vvlvv" podStartSLOduration=6.605153239 podStartE2EDuration="6.605153239s" podCreationTimestamp="2026-02-24 00:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:30.601778051 +0000 UTC m=+234.591402530" watchObservedRunningTime="2026-02-24 00:09:30.605153239 +0000 UTC m=+234.594777718" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.632011 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hl2rv" podStartSLOduration=160.631985592 podStartE2EDuration="2m40.631985592s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:30.631801357 +0000 UTC m=+234.621425846" watchObservedRunningTime="2026-02-24 00:09:30.631985592 +0000 UTC m=+234.621610061" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.649708 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:30 crc kubenswrapper[4824]: E0224 00:09:30.649931 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:31.149885341 +0000 UTC m=+235.139509820 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.655227 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:30 crc kubenswrapper[4824]: E0224 00:09:30.655724 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:31.155708173 +0000 UTC m=+235.145332632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.672098 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-8krrp" podStartSLOduration=160.672073942 podStartE2EDuration="2m40.672073942s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:30.654172673 +0000 UTC m=+234.643797162" watchObservedRunningTime="2026-02-24 00:09:30.672073942 +0000 UTC m=+234.661698421" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.675713 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29531520-969xh" podStartSLOduration=161.675699686 podStartE2EDuration="2m41.675699686s" podCreationTimestamp="2026-02-24 00:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:30.67162824 +0000 UTC m=+234.661252719" watchObservedRunningTime="2026-02-24 00:09:30.675699686 +0000 UTC m=+234.665324155" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.756855 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:30 crc kubenswrapper[4824]: E0224 00:09:30.757060 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:31.257029396 +0000 UTC m=+235.246653865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.757222 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:30 crc kubenswrapper[4824]: E0224 00:09:30.757818 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:31.257809926 +0000 UTC m=+235.247434395 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.857956 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:30 crc kubenswrapper[4824]: E0224 00:09:30.858246 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:31.358215185 +0000 UTC m=+235.347839664 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.858429 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:30 crc kubenswrapper[4824]: E0224 00:09:30.858821 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:31.358812661 +0000 UTC m=+235.348437130 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.901498 4824 patch_prober.go:28] interesting pod/router-default-5444994796-fp4wq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:09:30 crc kubenswrapper[4824]: [-]has-synced failed: reason withheld Feb 24 00:09:30 crc kubenswrapper[4824]: [+]process-running ok Feb 24 00:09:30 crc kubenswrapper[4824]: healthz check failed Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.902041 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fp4wq" podUID="5b0ff99f-1e04-4e23-895a-a02a303c8daa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:09:30 crc kubenswrapper[4824]: I0224 00:09:30.960508 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:30 crc kubenswrapper[4824]: E0224 00:09:30.961091 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 00:09:31.461050328 +0000 UTC m=+235.450674797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.062585 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:31 crc kubenswrapper[4824]: E0224 00:09:31.063082 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:31.563061149 +0000 UTC m=+235.552685618 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.163884 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:31 crc kubenswrapper[4824]: E0224 00:09:31.164129 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:31.664087804 +0000 UTC m=+235.653712273 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.164183 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:31 crc kubenswrapper[4824]: E0224 00:09:31.164749 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:31.664739961 +0000 UTC m=+235.654364430 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.265446 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:31 crc kubenswrapper[4824]: E0224 00:09:31.265715 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:31.765677764 +0000 UTC m=+235.755302233 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.266168 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:31 crc kubenswrapper[4824]: E0224 00:09:31.266565 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:31.766555367 +0000 UTC m=+235.756179836 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.367159 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:31 crc kubenswrapper[4824]: E0224 00:09:31.367622 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:31.867557741 +0000 UTC m=+235.857182270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.469631 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:31 crc kubenswrapper[4824]: E0224 00:09:31.470717 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:31.970698402 +0000 UTC m=+235.960322871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.550406 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ksmk7" event={"ID":"b14c3eec-796c-48b0-b4fe-67cb327f2de7","Type":"ContainerStarted","Data":"66df04767ad299bd1444bcd83a55f2280009eb8ca6a2e21f1daa8ca3500b808d"} Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.551079 4824 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-99tkw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.551124 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" podUID="e312a49f-dc7a-49fc-9baf-3105fec587ae" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.551325 4824 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-rvv9j container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.551375 4824 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rvv9j" podUID="a4a8a6d0-c052-4bca-9be8-dddd6d2ef017" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.551325 4824 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-jm7qk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.551436 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" podUID="49f90ffa-baa8-4d9b-bd17-3a12f9c6f115" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.551617 4824 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-jf5jw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.551646 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" podUID="6f8699c7-58f5-4a80-b5af-5403cb178676" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.552049 4824 patch_prober.go:28] interesting pod/console-operator-58897d9998-p5tqf container/console-operator 
namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.552159 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-p5tqf" podUID="2f39cab4-77fc-4641-9e84-c01b0dedc300" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.571887 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:31 crc kubenswrapper[4824]: E0224 00:09:31.572364 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:32.072343353 +0000 UTC m=+236.061967832 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.578467 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9" podStartSLOduration=161.578451573 podStartE2EDuration="2m41.578451573s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:31.575073665 +0000 UTC m=+235.564698134" watchObservedRunningTime="2026-02-24 00:09:31.578451573 +0000 UTC m=+235.568076072" Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.678647 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:31 crc kubenswrapper[4824]: E0224 00:09:31.679115 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:32.179090618 +0000 UTC m=+236.168715127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.782295 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:31 crc kubenswrapper[4824]: E0224 00:09:31.782987 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:32.282944757 +0000 UTC m=+236.272569226 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.884034 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:31 crc kubenswrapper[4824]: E0224 00:09:31.884466 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:32.384450625 +0000 UTC m=+236.374075094 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.908486 4824 patch_prober.go:28] interesting pod/router-default-5444994796-fp4wq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:09:31 crc kubenswrapper[4824]: [-]has-synced failed: reason withheld Feb 24 00:09:31 crc kubenswrapper[4824]: [+]process-running ok Feb 24 00:09:31 crc kubenswrapper[4824]: healthz check failed Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.908591 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fp4wq" podUID="5b0ff99f-1e04-4e23-895a-a02a303c8daa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.985430 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:31 crc kubenswrapper[4824]: E0224 00:09:31.985707 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 00:09:32.485665125 +0000 UTC m=+236.475289594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:31 crc kubenswrapper[4824]: I0224 00:09:31.985791 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:31 crc kubenswrapper[4824]: E0224 00:09:31.986215 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:32.486197369 +0000 UTC m=+236.475821848 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.087225 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:32 crc kubenswrapper[4824]: E0224 00:09:32.087385 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:32.587354898 +0000 UTC m=+236.576979377 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.088005 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:32 crc kubenswrapper[4824]: E0224 00:09:32.088444 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:32.588436636 +0000 UTC m=+236.578061105 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.189626 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:32 crc kubenswrapper[4824]: E0224 00:09:32.189829 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:32.689802479 +0000 UTC m=+236.679426958 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.190021 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:32 crc kubenswrapper[4824]: E0224 00:09:32.190376 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:32.690365084 +0000 UTC m=+236.679989553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.292645 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:32 crc kubenswrapper[4824]: E0224 00:09:32.292811 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:32.792782925 +0000 UTC m=+236.782407394 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.292971 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:32 crc kubenswrapper[4824]: E0224 00:09:32.293379 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:32.793367801 +0000 UTC m=+236.782992270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.394621 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:32 crc kubenswrapper[4824]: E0224 00:09:32.395254 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:32.895237168 +0000 UTC m=+236.884861627 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.496828 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:32 crc kubenswrapper[4824]: E0224 00:09:32.497402 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:32.997373932 +0000 UTC m=+236.986998581 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.565940 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" event={"ID":"836fad19-b7d1-434c-9fd8-faf3eb1d80d1","Type":"ContainerStarted","Data":"1b6eef4295149ffe56c232ba193db1b543ea313748e795c0c178e010dfcfcb6c"} Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.568578 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-h7djl" event={"ID":"8118fe3c-1479-4634-9b64-9350991d909d","Type":"ContainerStarted","Data":"e5992154328924a0401a820d6dad00bd43a7e667fb2550eca5fc7b3d69e96618"} Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.570635 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5ltfz" event={"ID":"ac257861-33c1-4e92-9d58-bb7351f6316e","Type":"ContainerStarted","Data":"ecde68be0b336e882d427864e1dfa4386122dc37a0c4f9beae52cff5d43902a6"} Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.572016 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w6pjq" event={"ID":"4cca2c2a-43ba-4b84-b10b-25053c6d7350","Type":"ContainerStarted","Data":"2ae757ea979a7ac019bb332bc89653a5714f9b00d5796260819aae7e0428b02f"} Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.573454 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5n768" 
event={"ID":"511cd2d5-0160-44f2-adf1-acbe5c8c28cf","Type":"ContainerStarted","Data":"ec72dee84d18c234baaccba6cf3679b1174eff33b9ced0b7b28f253c0c80a43e"} Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.574828 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zlnwh" event={"ID":"5f4f79cd-ada9-4ec7-b779-94d97bdadc97","Type":"ContainerStarted","Data":"3dbc8a0425a8303cb08f42b8f408754fe6fb9b00ab2e049ad49dd4c9ef81e736"} Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.576571 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g" event={"ID":"01ed973e-7ed7-41ec-bea9-69d8c86e19ed","Type":"ContainerStarted","Data":"76597979f779da55dc760372f9bfeeec71c0717a041440cee55acfa0e56598b2"} Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.578893 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c94hb" event={"ID":"c3e3160a-60c7-424c-b5a3-53841213467d","Type":"ContainerStarted","Data":"29ae1f2b3deee60e623dca5b325a17e8dcc7f86420b3a5605bec9d7d65816185"} Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.580471 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6h296" event={"ID":"67b600b4-d056-4b5f-b75e-0502de432461","Type":"ContainerStarted","Data":"c75b9acd0e6219277e1ed8cdeac6bc1c61b521997b9fb4d347a6fda7ba8b1f52"} Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.582728 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cxlfh" event={"ID":"ac86b042-947d-402f-a7c8-bb0a69d3f86e","Type":"ContainerStarted","Data":"ef7da6be8daeff3b4ee3adeb6ad4fdc1236a2d3702c75102fc84a3334d09e95d"} Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.585285 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-kh6hg" event={"ID":"53344821-2f26-459a-9e42-003f3f1b5a87","Type":"ContainerStarted","Data":"bb408f9716b85b0696c904977b4c9810441237ecfb99dda99024bcc78be24752"} Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.588009 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2jqvq" event={"ID":"a2e0a401-0fd7-499c-ac31-fc8cb0a64366","Type":"ContainerStarted","Data":"1fe87021265dbd7c6409775966557700181ff3808a6e5e56b264250598564bcc"} Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.590866 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xfl22" event={"ID":"f66ddecd-538b-48bd-a335-e7f99181daa0","Type":"ContainerStarted","Data":"ee640a5eba3acb5836b6e55a2210f8f9e73aeeb6b9711c79c0bdef1c3f9fb7b6"} Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.594755 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vgr74" event={"ID":"4349e271-7758-4dcf-9053-fbc984436a8b","Type":"ContainerStarted","Data":"717c6aab294c093e72a42685730d12eb9b72c40bb4cd1218cac66c674982e359"} Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.597770 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:32 crc kubenswrapper[4824]: E0224 00:09:32.598123 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 00:09:33.098038218 +0000 UTC m=+237.087662777 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.598785 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.598899 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjdhm" event={"ID":"44376c4d-d433-41a2-bdc1-22a9792e7640","Type":"ContainerStarted","Data":"05abb37f3bcb860ede06a3342411c0de6204f9dfcbc0bd6880e1f021469ed20a"} Feb 24 00:09:32 crc kubenswrapper[4824]: E0224 00:09:32.599236 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:33.099216829 +0000 UTC m=+237.088841298 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.600895 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs7nb" event={"ID":"c15a2653-454b-42e4-85b5-87b99cc30198","Type":"ContainerStarted","Data":"6fd95aec2574f28f3bc3f804569fbe48cd52370c2ad0f28f3a01b83682176761"} Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.602356 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4tvd9" event={"ID":"782d3fe9-7b5b-4d44-a3e6-0efea9d617ea","Type":"ContainerStarted","Data":"cc79de462b5a5b9e6de23c9332363aaa2ead5c5d34eddcfbb2be9e7e0cef65ce"} Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.605654 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ghwq4" event={"ID":"54cdfa0a-fdb0-4509-9d56-01194a25ee63","Type":"ContainerStarted","Data":"4ca6cf6c4ebab436114b68df61d02fb23d08e62e532d3c479d8cbfefd9da8122"} Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.606384 4824 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-jm7qk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.606417 4824 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" podUID="49f90ffa-baa8-4d9b-bd17-3a12f9c6f115" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.628888 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-ksmk7" podStartSLOduration=162.628864585 podStartE2EDuration="2m42.628864585s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:32.622858578 +0000 UTC m=+236.612483077" watchObservedRunningTime="2026-02-24 00:09:32.628864585 +0000 UTC m=+236.618489064" Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.704455 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:32 crc kubenswrapper[4824]: E0224 00:09:32.706182 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:33.206158929 +0000 UTC m=+237.195783398 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.806737 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:32 crc kubenswrapper[4824]: E0224 00:09:32.807289 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:33.307266956 +0000 UTC m=+237.296891425 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.903233 4824 patch_prober.go:28] interesting pod/router-default-5444994796-fp4wq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:09:32 crc kubenswrapper[4824]: [-]has-synced failed: reason withheld Feb 24 00:09:32 crc kubenswrapper[4824]: [+]process-running ok Feb 24 00:09:32 crc kubenswrapper[4824]: healthz check failed Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.903309 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fp4wq" podUID="5b0ff99f-1e04-4e23-895a-a02a303c8daa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:09:32 crc kubenswrapper[4824]: I0224 00:09:32.908264 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:32 crc kubenswrapper[4824]: E0224 00:09:32.908693 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 00:09:33.408678891 +0000 UTC m=+237.398303360 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:33 crc kubenswrapper[4824]: I0224 00:09:33.010612 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:33 crc kubenswrapper[4824]: E0224 00:09:33.011161 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:33.511132274 +0000 UTC m=+237.500756753 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:33 crc kubenswrapper[4824]: I0224 00:09:33.112675 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:33 crc kubenswrapper[4824]: E0224 00:09:33.112881 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:33.612852167 +0000 UTC m=+237.602476646 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:33 crc kubenswrapper[4824]: I0224 00:09:33.113456 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:33 crc kubenswrapper[4824]: E0224 00:09:33.113972 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:33.613950736 +0000 UTC m=+237.603575195 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:33 crc kubenswrapper[4824]: I0224 00:09:33.215539 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:33 crc kubenswrapper[4824]: E0224 00:09:33.215826 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:33.715787152 +0000 UTC m=+237.705411661 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:33 crc kubenswrapper[4824]: I0224 00:09:33.216161 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:33 crc kubenswrapper[4824]: E0224 00:09:33.216790 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:33.716766228 +0000 UTC m=+237.706390727 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:33 crc kubenswrapper[4824]: I0224 00:09:33.317108 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:33 crc kubenswrapper[4824]: E0224 00:09:33.317628 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:33.817567907 +0000 UTC m=+237.807192426 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:33 crc kubenswrapper[4824]: I0224 00:09:33.419571 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:33 crc kubenswrapper[4824]: E0224 00:09:33.420088 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:33.9200417 +0000 UTC m=+237.909666169 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:33 crc kubenswrapper[4824]: I0224 00:09:33.525594 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:33 crc kubenswrapper[4824]: E0224 00:09:33.526346 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:34.026305383 +0000 UTC m=+238.015929852 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:33 crc kubenswrapper[4824]: I0224 00:09:33.527265 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:33 crc kubenswrapper[4824]: E0224 00:09:33.527693 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:34.027681319 +0000 UTC m=+238.017305788 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:33 crc kubenswrapper[4824]: I0224 00:09:33.628492 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:33 crc kubenswrapper[4824]: E0224 00:09:33.628865 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:34.128819007 +0000 UTC m=+238.118443516 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:33 crc kubenswrapper[4824]: I0224 00:09:33.629287 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:33 crc kubenswrapper[4824]: E0224 00:09:33.629647 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:34.129631968 +0000 UTC m=+238.119256437 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:33 crc kubenswrapper[4824]: I0224 00:09:33.739718 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:33 crc kubenswrapper[4824]: E0224 00:09:33.739931 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:34.239888565 +0000 UTC m=+238.229513034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:33 crc kubenswrapper[4824]: I0224 00:09:33.740217 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:33 crc kubenswrapper[4824]: E0224 00:09:33.740692 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:34.240683936 +0000 UTC m=+238.230308405 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:33 crc kubenswrapper[4824]: I0224 00:09:33.842120 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:33 crc kubenswrapper[4824]: E0224 00:09:33.842386 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:34.342334427 +0000 UTC m=+238.331958906 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:33 crc kubenswrapper[4824]: I0224 00:09:33.842903 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:33 crc kubenswrapper[4824]: E0224 00:09:33.843302 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:34.343285302 +0000 UTC m=+238.332909771 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:33 crc kubenswrapper[4824]: I0224 00:09:33.901078 4824 patch_prober.go:28] interesting pod/router-default-5444994796-fp4wq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:09:33 crc kubenswrapper[4824]: [-]has-synced failed: reason withheld Feb 24 00:09:33 crc kubenswrapper[4824]: [+]process-running ok Feb 24 00:09:33 crc kubenswrapper[4824]: healthz check failed Feb 24 00:09:33 crc kubenswrapper[4824]: I0224 00:09:33.901215 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fp4wq" podUID="5b0ff99f-1e04-4e23-895a-a02a303c8daa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:09:33 crc kubenswrapper[4824]: I0224 00:09:33.943834 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:33 crc kubenswrapper[4824]: E0224 00:09:33.944163 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 00:09:34.444099492 +0000 UTC m=+238.433723961 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:33 crc kubenswrapper[4824]: I0224 00:09:33.944354 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:33 crc kubenswrapper[4824]: E0224 00:09:33.944867 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:34.444845391 +0000 UTC m=+238.434469880 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.046061 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:34 crc kubenswrapper[4824]: E0224 00:09:34.046311 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:34.546265347 +0000 UTC m=+238.535889826 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.046694 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:34 crc kubenswrapper[4824]: E0224 00:09:34.047082 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:34.547067028 +0000 UTC m=+238.536691497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.148308 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:34 crc kubenswrapper[4824]: E0224 00:09:34.148665 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:34.648620947 +0000 UTC m=+238.638245426 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.148998 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:34 crc kubenswrapper[4824]: E0224 00:09:34.149565 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:34.649553291 +0000 UTC m=+238.639177810 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.249904 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:34 crc kubenswrapper[4824]: E0224 00:09:34.250373 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:34.75035591 +0000 UTC m=+238.739980379 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.391206 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:34 crc kubenswrapper[4824]: E0224 00:09:34.392066 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:34.89204874 +0000 UTC m=+238.881673209 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.493640 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:34 crc kubenswrapper[4824]: E0224 00:09:34.495875 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:34.995848588 +0000 UTC m=+238.985473057 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.595627 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:34 crc kubenswrapper[4824]: E0224 00:09:34.596136 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:35.096119283 +0000 UTC m=+239.085743752 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.650287 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-cxlfh" podStartSLOduration=164.650258531 podStartE2EDuration="2m44.650258531s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:34.648045113 +0000 UTC m=+238.637669582" watchObservedRunningTime="2026-02-24 00:09:34.650258531 +0000 UTC m=+238.639883010" Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.668661 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2jqvq" podStartSLOduration=164.668636062 podStartE2EDuration="2m44.668636062s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:34.663357914 +0000 UTC m=+238.652982383" watchObservedRunningTime="2026-02-24 00:09:34.668636062 +0000 UTC m=+238.658260531" Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.696984 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:34 crc kubenswrapper[4824]: E0224 00:09:34.697394 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:35.197372934 +0000 UTC m=+239.186997403 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.706164 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-w6pjq" podStartSLOduration=164.706150334 podStartE2EDuration="2m44.706150334s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:34.70561713 +0000 UTC m=+238.695241609" watchObservedRunningTime="2026-02-24 00:09:34.706150334 +0000 UTC m=+238.695774803" Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.725077 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c94hb" podStartSLOduration=164.725053699 podStartE2EDuration="2m44.725053699s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-24 00:09:34.724606618 +0000 UTC m=+238.714231087" watchObservedRunningTime="2026-02-24 00:09:34.725053699 +0000 UTC m=+238.714678168" Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.740043 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6h296" podStartSLOduration=164.740018681 podStartE2EDuration="2m44.740018681s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:34.739351214 +0000 UTC m=+238.728975683" watchObservedRunningTime="2026-02-24 00:09:34.740018681 +0000 UTC m=+238.729643150" Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.797170 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" podStartSLOduration=164.797145867 podStartE2EDuration="2m44.797145867s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:34.796585512 +0000 UTC m=+238.786210001" watchObservedRunningTime="2026-02-24 00:09:34.797145867 +0000 UTC m=+238.786770336" Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.799034 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:34 crc kubenswrapper[4824]: E0224 00:09:34.804010 4824 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:35.303983346 +0000 UTC m=+239.293608055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.828842 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-kh6hg" podStartSLOduration=164.828820306 podStartE2EDuration="2m44.828820306s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:34.828356944 +0000 UTC m=+238.817981413" watchObservedRunningTime="2026-02-24 00:09:34.828820306 +0000 UTC m=+238.818444775" Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.864980 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g" podStartSLOduration=164.864954102 podStartE2EDuration="2m44.864954102s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:34.864659635 +0000 UTC m=+238.854284194" watchObservedRunningTime="2026-02-24 00:09:34.864954102 +0000 UTC m=+238.854578571" Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.883220 4824 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tjdhm" podStartSLOduration=164.88318385 podStartE2EDuration="2m44.88318385s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:34.882748788 +0000 UTC m=+238.872373267" watchObservedRunningTime="2026-02-24 00:09:34.88318385 +0000 UTC m=+238.872808329" Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.902615 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:34 crc kubenswrapper[4824]: E0224 00:09:34.903119 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:35.403097991 +0000 UTC m=+239.392722460 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.909814 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vgr74" podStartSLOduration=165.909790896 podStartE2EDuration="2m45.909790896s" podCreationTimestamp="2026-02-24 00:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:34.907901247 +0000 UTC m=+238.897525716" watchObservedRunningTime="2026-02-24 00:09:34.909790896 +0000 UTC m=+238.899415365" Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.916218 4824 patch_prober.go:28] interesting pod/router-default-5444994796-fp4wq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:09:34 crc kubenswrapper[4824]: [-]has-synced failed: reason withheld Feb 24 00:09:34 crc kubenswrapper[4824]: [+]process-running ok Feb 24 00:09:34 crc kubenswrapper[4824]: healthz check failed Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.916301 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fp4wq" podUID="5b0ff99f-1e04-4e23-895a-a02a303c8daa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.937948 4824 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-console/console-f9d7485db-zlnwh" podStartSLOduration=164.937928373 podStartE2EDuration="2m44.937928373s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:34.936474465 +0000 UTC m=+238.926098934" watchObservedRunningTime="2026-02-24 00:09:34.937928373 +0000 UTC m=+238.927552842" Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.973718 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4tvd9" podStartSLOduration=10.973696329 podStartE2EDuration="10.973696329s" podCreationTimestamp="2026-02-24 00:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:34.972398135 +0000 UTC m=+238.962022604" watchObservedRunningTime="2026-02-24 00:09:34.973696329 +0000 UTC m=+238.963320798" Feb 24 00:09:34 crc kubenswrapper[4824]: I0224 00:09:34.997507 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-h7djl" podStartSLOduration=164.997480542 podStartE2EDuration="2m44.997480542s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:34.995368687 +0000 UTC m=+238.984993166" watchObservedRunningTime="2026-02-24 00:09:34.997480542 +0000 UTC m=+238.987105021" Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.010428 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: 
\"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:35 crc kubenswrapper[4824]: E0224 00:09:35.010924 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:35.510899894 +0000 UTC m=+239.500524433 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.035839 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs7nb" podStartSLOduration=165.035809196 podStartE2EDuration="2m45.035809196s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:35.031718109 +0000 UTC m=+239.021342598" watchObservedRunningTime="2026-02-24 00:09:35.035809196 +0000 UTC m=+239.025433665" Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.111867 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:35 crc kubenswrapper[4824]: E0224 00:09:35.112074 4824 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:35.612036862 +0000 UTC m=+239.601661331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.112134 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:35 crc kubenswrapper[4824]: E0224 00:09:35.112538 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:35.612528695 +0000 UTC m=+239.602153164 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.214017 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:35 crc kubenswrapper[4824]: E0224 00:09:35.214218 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:35.714172586 +0000 UTC m=+239.703797065 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.214425 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:35 crc kubenswrapper[4824]: E0224 00:09:35.215011 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:35.714985257 +0000 UTC m=+239.704609906 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.236012 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.236906 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.242984 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.243562 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.261466 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.315554 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:35 crc kubenswrapper[4824]: E0224 00:09:35.315809 4824 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:35.815762426 +0000 UTC m=+239.805386905 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.315875 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:35 crc kubenswrapper[4824]: E0224 00:09:35.316565 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:35.816546797 +0000 UTC m=+239.806171266 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.371686 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g" Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.373538 4824 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9ml5g container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.373614 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g" podUID="01ed973e-7ed7-41ec-bea9-69d8c86e19ed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.373665 4824 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9ml5g container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.373748 4824 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g" podUID="01ed973e-7ed7-41ec-bea9-69d8c86e19ed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.417439 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.417691 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/671ced26-8fac-4a17-a516-ab23ebcd6945-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"671ced26-8fac-4a17-a516-ab23ebcd6945\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.417724 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/671ced26-8fac-4a17-a516-ab23ebcd6945-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"671ced26-8fac-4a17-a516-ab23ebcd6945\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 00:09:35 crc kubenswrapper[4824]: E0224 00:09:35.417878 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:35.917861579 +0000 UTC m=+239.907486048 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.519548 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/671ced26-8fac-4a17-a516-ab23ebcd6945-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"671ced26-8fac-4a17-a516-ab23ebcd6945\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.519640 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/671ced26-8fac-4a17-a516-ab23ebcd6945-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"671ced26-8fac-4a17-a516-ab23ebcd6945\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.519700 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/671ced26-8fac-4a17-a516-ab23ebcd6945-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"671ced26-8fac-4a17-a516-ab23ebcd6945\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.519756 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: 
\"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:35 crc kubenswrapper[4824]: E0224 00:09:35.520462 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:36.020448975 +0000 UTC m=+240.010073444 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.643824 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:35 crc kubenswrapper[4824]: E0224 00:09:35.644306 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:36.144286268 +0000 UTC m=+240.133910737 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.654410 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5ltfz" event={"ID":"ac257861-33c1-4e92-9d58-bb7351f6316e","Type":"ContainerStarted","Data":"0ed20bbbc63392a3eeae8c7d2fd31b71d7f1c1c0e7cf28ce11803b7eefc35248"} Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.664381 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/671ced26-8fac-4a17-a516-ab23ebcd6945-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"671ced26-8fac-4a17-a516-ab23ebcd6945\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.666880 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ghwq4" event={"ID":"54cdfa0a-fdb0-4509-9d56-01194a25ee63","Type":"ContainerStarted","Data":"490ef12a95d77d56bcb6473530e8535785604863f20cce732613ee074d0bbee8"} Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.669666 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5n768" event={"ID":"511cd2d5-0160-44f2-adf1-acbe5c8c28cf","Type":"ContainerStarted","Data":"e980e0e1e456ea4385f34dec4d8a53291d7e56983ca3dd4e91532d20097f7188"} Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.670198 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-5n768" 
Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.673820 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xfl22" event={"ID":"f66ddecd-538b-48bd-a335-e7f99181daa0","Type":"ContainerStarted","Data":"cdb56e67daad25b19cf0044f1b485f89750f7982e1f6b323aa8a0e2cc9471aaa"} Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.673916 4824 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-9ml5g container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.673955 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g" podUID="01ed973e-7ed7-41ec-bea9-69d8c86e19ed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.697125 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5ltfz" podStartSLOduration=165.69709694 podStartE2EDuration="2m45.69709694s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:35.689709787 +0000 UTC m=+239.679334276" watchObservedRunningTime="2026-02-24 00:09:35.69709694 +0000 UTC m=+239.686721409" Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.745774 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:35 crc kubenswrapper[4824]: E0224 00:09:35.746362 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:36.246341229 +0000 UTC m=+240.235965698 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.756648 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xfl22" podStartSLOduration=166.756623498 podStartE2EDuration="2m46.756623498s" podCreationTimestamp="2026-02-24 00:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:35.731195562 +0000 UTC m=+239.720820041" watchObservedRunningTime="2026-02-24 00:09:35.756623498 +0000 UTC m=+239.746247967" Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.757324 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-5n768" podStartSLOduration=11.757316936 podStartE2EDuration="11.757316936s" podCreationTimestamp="2026-02-24 00:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 
00:09:35.752939892 +0000 UTC m=+239.742564371" watchObservedRunningTime="2026-02-24 00:09:35.757316936 +0000 UTC m=+239.746941405" Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.847674 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:35 crc kubenswrapper[4824]: E0224 00:09:35.849415 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:36.349386447 +0000 UTC m=+240.339010916 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.855175 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.917814 4824 patch_prober.go:28] interesting pod/router-default-5444994796-fp4wq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:09:35 crc kubenswrapper[4824]: [-]has-synced failed: reason withheld Feb 24 00:09:35 crc kubenswrapper[4824]: [+]process-running ok Feb 24 00:09:35 crc kubenswrapper[4824]: healthz check failed Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.917867 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fp4wq" podUID="5b0ff99f-1e04-4e23-895a-a02a303c8daa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:09:35 crc kubenswrapper[4824]: I0224 00:09:35.949478 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:35 crc kubenswrapper[4824]: E0224 00:09:35.950111 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:36.450080933 +0000 UTC m=+240.439705622 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.050775 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:36 crc kubenswrapper[4824]: E0224 00:09:36.051033 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:36.550989955 +0000 UTC m=+240.540614424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.051376 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:36 crc kubenswrapper[4824]: E0224 00:09:36.051807 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:36.551790756 +0000 UTC m=+240.541415405 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.153362 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:36 crc kubenswrapper[4824]: E0224 00:09:36.153606 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:36.653573161 +0000 UTC m=+240.643197630 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.153943 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:36 crc kubenswrapper[4824]: E0224 00:09:36.154290 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:36.65427552 +0000 UTC m=+240.643899989 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.255557 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:36 crc kubenswrapper[4824]: E0224 00:09:36.255971 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:36.755931641 +0000 UTC m=+240.745556110 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.256392 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:36 crc kubenswrapper[4824]: E0224 00:09:36.256839 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:36.756829165 +0000 UTC m=+240.746453624 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.358354 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:36 crc kubenswrapper[4824]: E0224 00:09:36.358690 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:36.858670932 +0000 UTC m=+240.848295391 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.370892 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r" Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.386338 4824 patch_prober.go:28] interesting pod/downloads-7954f5f757-r4c4b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.386409 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-r4c4b" podUID="581e69ae-c21a-4a9e-b1ea-9c38256d7b30" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.386449 4824 patch_prober.go:28] interesting pod/downloads-7954f5f757-r4c4b container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.386534 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-r4c4b" podUID="581e69ae-c21a-4a9e-b1ea-9c38256d7b30" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.405359 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ghwq4" podStartSLOduration=166.405330053 podStartE2EDuration="2m46.405330053s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:35.788802431 +0000 UTC m=+239.778426900" watchObservedRunningTime="2026-02-24 00:09:36.405330053 +0000 UTC m=+240.394954522" Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.459656 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:36 crc kubenswrapper[4824]: E0224 00:09:36.460063 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:36.960046766 +0000 UTC m=+240.949671235 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.509343 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.516407 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.533615 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.565754 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:36 crc kubenswrapper[4824]: E0224 00:09:36.567548 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:37.067509079 +0000 UTC m=+241.057133548 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.582921 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.582954 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.670597 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:36 crc kubenswrapper[4824]: E0224 00:09:36.672565 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:37.17254661 +0000 UTC m=+241.162171079 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.734424 4824 csr.go:261] certificate signing request csr-bjltj is approved, waiting to be issued Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.734754 4824 csr.go:257] certificate signing request csr-bjltj is issued Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.773777 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:36 crc kubenswrapper[4824]: E0224 00:09:36.774773 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:37.274690624 +0000 UTC m=+241.264315153 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.795923 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"671ced26-8fac-4a17-a516-ab23ebcd6945","Type":"ContainerStarted","Data":"dc2fc5cc3054549841a2254f5e01ca95caee472926cb159e358b999488865ad7"} Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.802194 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" event={"ID":"42d75b69-be96-43de-8687-444a81d8ebd5","Type":"ContainerStarted","Data":"338a12386f0317dce096ca7a1165344a983be9677fb91c8d31897c5133dbe942"} Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.877301 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:36 crc kubenswrapper[4824]: E0224 00:09:36.877676 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:37.37766363 +0000 UTC m=+241.367288099 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.897869 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-fp4wq" Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.907117 4824 patch_prober.go:28] interesting pod/router-default-5444994796-fp4wq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:09:36 crc kubenswrapper[4824]: [-]has-synced failed: reason withheld Feb 24 00:09:36 crc kubenswrapper[4824]: [+]process-running ok Feb 24 00:09:36 crc kubenswrapper[4824]: healthz check failed Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.907591 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fp4wq" podUID="5b0ff99f-1e04-4e23-895a-a02a303c8daa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.979125 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:36 crc kubenswrapper[4824]: E0224 00:09:36.979251 4824 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:37.47923385 +0000 UTC m=+241.468858309 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:36 crc kubenswrapper[4824]: I0224 00:09:36.981955 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:36 crc kubenswrapper[4824]: E0224 00:09:36.982470 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:37.482461484 +0000 UTC m=+241.472085953 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.035457 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.036067 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.038329 4824 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xfl22 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.34:8443/livez\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.038381 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xfl22" podUID="f66ddecd-538b-48bd-a335-e7f99181daa0" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.34:8443/livez\": dial tcp 10.217.0.34:8443: connect: connection refused" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.082567 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:37 crc kubenswrapper[4824]: E0224 
00:09:37.082813 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:37.582775541 +0000 UTC m=+241.572400010 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.083093 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:37 crc kubenswrapper[4824]: E0224 00:09:37.083559 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:37.583543831 +0000 UTC m=+241.573168300 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.107664 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-p5tqf" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.184645 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:37 crc kubenswrapper[4824]: E0224 00:09:37.184874 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:37.684840273 +0000 UTC m=+241.674464742 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.185062 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:37 crc kubenswrapper[4824]: E0224 00:09:37.185493 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:37.68547827 +0000 UTC m=+241.675102739 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.186453 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.187822 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.195112 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.195865 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.197265 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-rvv9j" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.208546 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.274996 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dmjz7"] Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.282570 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dmjz7" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.285922 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.286142 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/393dd5ac-a813-412e-ac2d-1d654d3e5c64-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"393dd5ac-a813-412e-ac2d-1d654d3e5c64\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.286219 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/393dd5ac-a813-412e-ac2d-1d654d3e5c64-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"393dd5ac-a813-412e-ac2d-1d654d3e5c64\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 00:09:37 crc kubenswrapper[4824]: E0224 00:09:37.286493 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:37.786456404 +0000 UTC m=+241.776080873 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.292545 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.299195 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.316562 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.320754 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dmjz7"] Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.334070 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.349628 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2jqvq" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.350762 4824 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-jkghx container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 24 00:09:37 crc kubenswrapper[4824]: [+]log ok Feb 24 00:09:37 
crc kubenswrapper[4824]: [+]etcd ok Feb 24 00:09:37 crc kubenswrapper[4824]: [+]etcd-readiness ok Feb 24 00:09:37 crc kubenswrapper[4824]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 24 00:09:37 crc kubenswrapper[4824]: [-]informer-sync failed: reason withheld Feb 24 00:09:37 crc kubenswrapper[4824]: [+]poststarthook/generic-apiserver-start-informers ok Feb 24 00:09:37 crc kubenswrapper[4824]: [+]poststarthook/max-in-flight-filter ok Feb 24 00:09:37 crc kubenswrapper[4824]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 24 00:09:37 crc kubenswrapper[4824]: [+]poststarthook/openshift.io-StartUserInformer ok Feb 24 00:09:37 crc kubenswrapper[4824]: [+]poststarthook/openshift.io-StartOAuthInformer ok Feb 24 00:09:37 crc kubenswrapper[4824]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Feb 24 00:09:37 crc kubenswrapper[4824]: [+]shutdown ok Feb 24 00:09:37 crc kubenswrapper[4824]: readyz check failed Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.350858 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" podUID="836fad19-b7d1-434c-9fd8-faf3eb1d80d1" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.364712 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.364752 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.373934 4824 patch_prober.go:28] interesting pod/console-f9d7485db-zlnwh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Feb 24 00:09:37 crc 
kubenswrapper[4824]: I0224 00:09:37.374016 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zlnwh" podUID="5f4f79cd-ada9-4ec7-b779-94d97bdadc97" containerName="console" probeResult="failure" output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.388867 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxnlm\" (UniqueName: \"kubernetes.io/projected/cc119514-5c95-4925-8a1a-3e6844a34e1e-kube-api-access-bxnlm\") pod \"certified-operators-dmjz7\" (UID: \"cc119514-5c95-4925-8a1a-3e6844a34e1e\") " pod="openshift-marketplace/certified-operators-dmjz7" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.388919 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc119514-5c95-4925-8a1a-3e6844a34e1e-catalog-content\") pod \"certified-operators-dmjz7\" (UID: \"cc119514-5c95-4925-8a1a-3e6844a34e1e\") " pod="openshift-marketplace/certified-operators-dmjz7" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.388943 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/393dd5ac-a813-412e-ac2d-1d654d3e5c64-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"393dd5ac-a813-412e-ac2d-1d654d3e5c64\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.389010 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/393dd5ac-a813-412e-ac2d-1d654d3e5c64-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"393dd5ac-a813-412e-ac2d-1d654d3e5c64\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 00:09:37 crc 
kubenswrapper[4824]: I0224 00:09:37.389071 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc119514-5c95-4925-8a1a-3e6844a34e1e-utilities\") pod \"certified-operators-dmjz7\" (UID: \"cc119514-5c95-4925-8a1a-3e6844a34e1e\") " pod="openshift-marketplace/certified-operators-dmjz7" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.389143 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:37 crc kubenswrapper[4824]: E0224 00:09:37.389538 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:37.889504522 +0000 UTC m=+241.879128991 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.389669 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/393dd5ac-a813-412e-ac2d-1d654d3e5c64-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"393dd5ac-a813-412e-ac2d-1d654d3e5c64\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.470250 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hhftg"] Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.480930 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hhftg" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.490673 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.491030 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxnlm\" (UniqueName: \"kubernetes.io/projected/cc119514-5c95-4925-8a1a-3e6844a34e1e-kube-api-access-bxnlm\") pod \"certified-operators-dmjz7\" (UID: \"cc119514-5c95-4925-8a1a-3e6844a34e1e\") " pod="openshift-marketplace/certified-operators-dmjz7" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.491053 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc119514-5c95-4925-8a1a-3e6844a34e1e-catalog-content\") pod \"certified-operators-dmjz7\" (UID: \"cc119514-5c95-4925-8a1a-3e6844a34e1e\") " pod="openshift-marketplace/certified-operators-dmjz7" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.491203 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc119514-5c95-4925-8a1a-3e6844a34e1e-utilities\") pod \"certified-operators-dmjz7\" (UID: \"cc119514-5c95-4925-8a1a-3e6844a34e1e\") " pod="openshift-marketplace/certified-operators-dmjz7" Feb 24 00:09:37 crc kubenswrapper[4824]: E0224 00:09:37.492678 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 00:09:37.992659633 +0000 UTC m=+241.982284102 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.494213 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc119514-5c95-4925-8a1a-3e6844a34e1e-catalog-content\") pod \"certified-operators-dmjz7\" (UID: \"cc119514-5c95-4925-8a1a-3e6844a34e1e\") " pod="openshift-marketplace/certified-operators-dmjz7" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.500029 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/393dd5ac-a813-412e-ac2d-1d654d3e5c64-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"393dd5ac-a813-412e-ac2d-1d654d3e5c64\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.504032 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc119514-5c95-4925-8a1a-3e6844a34e1e-utilities\") pod \"certified-operators-dmjz7\" (UID: \"cc119514-5c95-4925-8a1a-3e6844a34e1e\") " pod="openshift-marketplace/certified-operators-dmjz7" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.508233 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.541797 4824 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.567142 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs7nb" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.593580 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e306ddf-071d-47f2-b9b1-bf772963438e-utilities\") pod \"community-operators-hhftg\" (UID: \"3e306ddf-071d-47f2-b9b1-bf772963438e\") " pod="openshift-marketplace/community-operators-hhftg" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.593692 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.593750 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e306ddf-071d-47f2-b9b1-bf772963438e-catalog-content\") pod \"community-operators-hhftg\" (UID: \"3e306ddf-071d-47f2-b9b1-bf772963438e\") " pod="openshift-marketplace/community-operators-hhftg" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.593780 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndbcf\" (UniqueName: \"kubernetes.io/projected/3e306ddf-071d-47f2-b9b1-bf772963438e-kube-api-access-ndbcf\") pod \"community-operators-hhftg\" (UID: \"3e306ddf-071d-47f2-b9b1-bf772963438e\") " 
pod="openshift-marketplace/community-operators-hhftg" Feb 24 00:09:37 crc kubenswrapper[4824]: E0224 00:09:37.594114 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:38.094098839 +0000 UTC m=+242.083723308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.651385 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xs7nb" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.696446 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.697107 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e306ddf-071d-47f2-b9b1-bf772963438e-catalog-content\") pod \"community-operators-hhftg\" (UID: \"3e306ddf-071d-47f2-b9b1-bf772963438e\") " pod="openshift-marketplace/community-operators-hhftg" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.697139 4824 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-ndbcf\" (UniqueName: \"kubernetes.io/projected/3e306ddf-071d-47f2-b9b1-bf772963438e-kube-api-access-ndbcf\") pod \"community-operators-hhftg\" (UID: \"3e306ddf-071d-47f2-b9b1-bf772963438e\") " pod="openshift-marketplace/community-operators-hhftg" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.697195 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e306ddf-071d-47f2-b9b1-bf772963438e-utilities\") pod \"community-operators-hhftg\" (UID: \"3e306ddf-071d-47f2-b9b1-bf772963438e\") " pod="openshift-marketplace/community-operators-hhftg" Feb 24 00:09:37 crc kubenswrapper[4824]: E0224 00:09:37.698123 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:38.198095132 +0000 UTC m=+242.187719601 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.701635 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e306ddf-071d-47f2-b9b1-bf772963438e-catalog-content\") pod \"community-operators-hhftg\" (UID: \"3e306ddf-071d-47f2-b9b1-bf772963438e\") " pod="openshift-marketplace/community-operators-hhftg" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.715576 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e306ddf-071d-47f2-b9b1-bf772963438e-utilities\") pod \"community-operators-hhftg\" (UID: \"3e306ddf-071d-47f2-b9b1-bf772963438e\") " pod="openshift-marketplace/community-operators-hhftg" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.736354 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 00:04:36 +0000 UTC, rotation deadline is 2027-01-12 08:09:59.953671046 +0000 UTC Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.736402 4824 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7736h0m22.217271971s for next certificate rotation Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.801544 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: 
\"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:37 crc kubenswrapper[4824]: E0224 00:09:37.802215 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:38.302194527 +0000 UTC m=+242.291818996 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.849833 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"671ced26-8fac-4a17-a516-ab23ebcd6945","Type":"ContainerStarted","Data":"c1a2c2c662a6d26367b757b336ffe84f28d97409d74d7d15771e02012f07ca5b"} Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.861607 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hhftg"] Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.872574 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6kgrd"] Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.874635 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6kgrd" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.879348 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndbcf\" (UniqueName: \"kubernetes.io/projected/3e306ddf-071d-47f2-b9b1-bf772963438e-kube-api-access-ndbcf\") pod \"community-operators-hhftg\" (UID: \"3e306ddf-071d-47f2-b9b1-bf772963438e\") " pod="openshift-marketplace/community-operators-hhftg" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.883505 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxnlm\" (UniqueName: \"kubernetes.io/projected/cc119514-5c95-4925-8a1a-3e6844a34e1e-kube-api-access-bxnlm\") pod \"certified-operators-dmjz7\" (UID: \"cc119514-5c95-4925-8a1a-3e6844a34e1e\") " pod="openshift-marketplace/certified-operators-dmjz7" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.895608 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6kgrd"] Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.899383 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bfhcg"] Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.905590 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:37 crc kubenswrapper[4824]: E0224 00:09:37.906161 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 00:09:38.406135509 +0000 UTC m=+242.395759978 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.917501 4824 patch_prober.go:28] interesting pod/router-default-5444994796-fp4wq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:09:37 crc kubenswrapper[4824]: [-]has-synced failed: reason withheld Feb 24 00:09:37 crc kubenswrapper[4824]: [+]process-running ok Feb 24 00:09:37 crc kubenswrapper[4824]: healthz check failed Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.917579 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fp4wq" podUID="5b0ff99f-1e04-4e23-895a-a02a303c8daa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.919308 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dmjz7" Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.921777 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" event={"ID":"42d75b69-be96-43de-8687-444a81d8ebd5","Type":"ContainerStarted","Data":"a741aa07fb76f7ffe35f231ee36df68a7e9c2e0f768faf293f22c13513da4d44"} Feb 24 00:09:37 crc kubenswrapper[4824]: I0224 00:09:37.921930 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bfhcg" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.005225 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bfhcg"] Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.024216 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4-utilities\") pod \"certified-operators-6kgrd\" (UID: \"b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4\") " pod="openshift-marketplace/certified-operators-6kgrd" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.024271 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b00860ed-9085-40bb-9041-16eac6d88fb1-utilities\") pod \"community-operators-bfhcg\" (UID: \"b00860ed-9085-40bb-9041-16eac6d88fb1\") " pod="openshift-marketplace/community-operators-bfhcg" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.024310 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c94kl\" (UniqueName: \"kubernetes.io/projected/b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4-kube-api-access-c94kl\") pod \"certified-operators-6kgrd\" (UID: \"b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4\") " 
pod="openshift-marketplace/certified-operators-6kgrd" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.024369 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4-catalog-content\") pod \"certified-operators-6kgrd\" (UID: \"b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4\") " pod="openshift-marketplace/certified-operators-6kgrd" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.024405 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b00860ed-9085-40bb-9041-16eac6d88fb1-catalog-content\") pod \"community-operators-bfhcg\" (UID: \"b00860ed-9085-40bb-9041-16eac6d88fb1\") " pod="openshift-marketplace/community-operators-bfhcg" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.024442 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.024499 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t895b\" (UniqueName: \"kubernetes.io/projected/b00860ed-9085-40bb-9041-16eac6d88fb1-kube-api-access-t895b\") pod \"community-operators-bfhcg\" (UID: \"b00860ed-9085-40bb-9041-16eac6d88fb1\") " pod="openshift-marketplace/community-operators-bfhcg" Feb 24 00:09:38 crc kubenswrapper[4824]: E0224 00:09:38.028006 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-24 00:09:38.527970329 +0000 UTC m=+242.517594798 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.125796 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.126700 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b00860ed-9085-40bb-9041-16eac6d88fb1-catalog-content\") pod \"community-operators-bfhcg\" (UID: \"b00860ed-9085-40bb-9041-16eac6d88fb1\") " pod="openshift-marketplace/community-operators-bfhcg" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.126789 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t895b\" (UniqueName: \"kubernetes.io/projected/b00860ed-9085-40bb-9041-16eac6d88fb1-kube-api-access-t895b\") pod \"community-operators-bfhcg\" (UID: \"b00860ed-9085-40bb-9041-16eac6d88fb1\") " pod="openshift-marketplace/community-operators-bfhcg" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.126888 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4-utilities\") pod \"certified-operators-6kgrd\" (UID: \"b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4\") " pod="openshift-marketplace/certified-operators-6kgrd" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.126931 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b00860ed-9085-40bb-9041-16eac6d88fb1-utilities\") pod \"community-operators-bfhcg\" (UID: \"b00860ed-9085-40bb-9041-16eac6d88fb1\") " pod="openshift-marketplace/community-operators-bfhcg" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.126957 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c94kl\" (UniqueName: \"kubernetes.io/projected/b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4-kube-api-access-c94kl\") pod \"certified-operators-6kgrd\" (UID: \"b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4\") " pod="openshift-marketplace/certified-operators-6kgrd" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.126978 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4-catalog-content\") pod \"certified-operators-6kgrd\" (UID: \"b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4\") " pod="openshift-marketplace/certified-operators-6kgrd" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.131085 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4-catalog-content\") pod \"certified-operators-6kgrd\" (UID: \"b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4\") " pod="openshift-marketplace/certified-operators-6kgrd" Feb 24 00:09:38 crc kubenswrapper[4824]: E0224 00:09:38.131997 4824 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:38.631973592 +0000 UTC m=+242.621598051 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.136097 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b00860ed-9085-40bb-9041-16eac6d88fb1-utilities\") pod \"community-operators-bfhcg\" (UID: \"b00860ed-9085-40bb-9041-16eac6d88fb1\") " pod="openshift-marketplace/community-operators-bfhcg" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.136391 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4-utilities\") pod \"certified-operators-6kgrd\" (UID: \"b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4\") " pod="openshift-marketplace/certified-operators-6kgrd" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.148274 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b00860ed-9085-40bb-9041-16eac6d88fb1-catalog-content\") pod \"community-operators-bfhcg\" (UID: \"b00860ed-9085-40bb-9041-16eac6d88fb1\") " pod="openshift-marketplace/community-operators-bfhcg" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.148900 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hhftg" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.202593 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c94kl\" (UniqueName: \"kubernetes.io/projected/b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4-kube-api-access-c94kl\") pod \"certified-operators-6kgrd\" (UID: \"b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4\") " pod="openshift-marketplace/certified-operators-6kgrd" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.215710 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t895b\" (UniqueName: \"kubernetes.io/projected/b00860ed-9085-40bb-9041-16eac6d88fb1-kube-api-access-t895b\") pod \"community-operators-bfhcg\" (UID: \"b00860ed-9085-40bb-9041-16eac6d88fb1\") " pod="openshift-marketplace/community-operators-bfhcg" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.234883 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:38 crc kubenswrapper[4824]: E0224 00:09:38.235272 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:38.735259916 +0000 UTC m=+242.724884375 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.256233 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d95c9" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.271411 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bfhcg" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.310629 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.310592179 podStartE2EDuration="3.310592179s" podCreationTimestamp="2026-02-24 00:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:38.271224258 +0000 UTC m=+242.260848727" watchObservedRunningTime="2026-02-24 00:09:38.310592179 +0000 UTC m=+242.300216658" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.336080 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:38 crc kubenswrapper[4824]: E0224 00:09:38.343995 4824 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:38.843944142 +0000 UTC m=+242.833568611 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.358162 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:38 crc kubenswrapper[4824]: E0224 00:09:38.358727 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:38.858711719 +0000 UTC m=+242.848336188 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.369125 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6kgrd" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.464075 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9ml5g" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.466510 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:38 crc kubenswrapper[4824]: E0224 00:09:38.467369 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:38.967347163 +0000 UTC m=+242.956971642 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.559959 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jm7qk"] Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.560215 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" podUID="49f90ffa-baa8-4d9b-bd17-3a12f9c6f115" containerName="controller-manager" containerID="cri-o://5b26b0b907a714e0eb8fa1b65c28de4396d3dfbda124fc0f95f4d779730bf39c" gracePeriod=30 Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.569660 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:38 crc kubenswrapper[4824]: E0224 00:09:38.570278 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:39.070258428 +0000 UTC m=+243.059882887 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.650737 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r"] Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.651074 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r" podUID="1c4e1d48-7f8d-44b6-97b3-3ceccb35385b" containerName="route-controller-manager" containerID="cri-o://b81e27c4f382da70441b9aeabc639347240cb4f01fe8b81ba75c60688c944968" gracePeriod=30 Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.670755 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:38 crc kubenswrapper[4824]: E0224 00:09:38.671393 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:39.171377625 +0000 UTC m=+243.161002094 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.687210 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.748716 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dmjz7"] Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.773886 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:38 crc kubenswrapper[4824]: E0224 00:09:38.774358 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:39.274342671 +0000 UTC m=+243.263967130 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.877364 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:38 crc kubenswrapper[4824]: E0224 00:09:38.877794 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:09:39.377770199 +0000 UTC m=+243.367394668 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.889581 4824 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.900234 4824 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-24T00:09:38.889611139Z","Handler":null,"Name":""} Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.912079 4824 patch_prober.go:28] interesting pod/router-default-5444994796-fp4wq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:09:38 crc kubenswrapper[4824]: [-]has-synced failed: reason withheld Feb 24 00:09:38 crc kubenswrapper[4824]: [+]process-running ok Feb 24 00:09:38 crc kubenswrapper[4824]: healthz check failed Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.912183 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fp4wq" podUID="5b0ff99f-1e04-4e23-895a-a02a303c8daa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.969142 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"393dd5ac-a813-412e-ac2d-1d654d3e5c64","Type":"ContainerStarted","Data":"7c1c92983cde9f50ed89292c9ba489e6609f2698fffaab5523fc26a4c6ca4f45"} Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.981288 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:38 crc kubenswrapper[4824]: E0224 00:09:38.981707 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:09:39.48169048 +0000 UTC m=+243.471314949 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ccm27" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.994266 4824 generic.go:334] "Generic (PLEG): container finished" podID="1c4e1d48-7f8d-44b6-97b3-3ceccb35385b" containerID="b81e27c4f382da70441b9aeabc639347240cb4f01fe8b81ba75c60688c944968" exitCode=0 Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.994407 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r" 
event={"ID":"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b","Type":"ContainerDied","Data":"b81e27c4f382da70441b9aeabc639347240cb4f01fe8b81ba75c60688c944968"} Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.994684 4824 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 24 00:09:38 crc kubenswrapper[4824]: I0224 00:09:38.994735 4824 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.011659 4824 generic.go:334] "Generic (PLEG): container finished" podID="49f90ffa-baa8-4d9b-bd17-3a12f9c6f115" containerID="5b26b0b907a714e0eb8fa1b65c28de4396d3dfbda124fc0f95f4d779730bf39c" exitCode=0 Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.011804 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" event={"ID":"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115","Type":"ContainerDied","Data":"5b26b0b907a714e0eb8fa1b65c28de4396d3dfbda124fc0f95f4d779730bf39c"} Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.016627 4824 generic.go:334] "Generic (PLEG): container finished" podID="671ced26-8fac-4a17-a516-ab23ebcd6945" containerID="c1a2c2c662a6d26367b757b336ffe84f28d97409d74d7d15771e02012f07ca5b" exitCode=0 Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.017060 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"671ced26-8fac-4a17-a516-ab23ebcd6945","Type":"ContainerDied","Data":"c1a2c2c662a6d26367b757b336ffe84f28d97409d74d7d15771e02012f07ca5b"} Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.049062 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" 
event={"ID":"42d75b69-be96-43de-8687-444a81d8ebd5","Type":"ContainerStarted","Data":"5cc948d744be03102a61342e2f49ec08158b5ed47625f2fd4288c19c07ae5798"} Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.050794 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmjz7" event={"ID":"cc119514-5c95-4925-8a1a-3e6844a34e1e","Type":"ContainerStarted","Data":"42b124ba705dc951f666837537c3a14e76c91f608879722e252c98578703a4ac"} Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.089945 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.172260 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.198370 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.199013 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nzqwf"] Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.207290 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nzqwf"] Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.207436 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nzqwf" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.211552 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.211755 4824 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.211810 4824 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.313611 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ccm27\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.362669 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.417722 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hggqz\" (UniqueName: \"kubernetes.io/projected/b142d96b-87c3-444b-b135-fdddaa658234-kube-api-access-hggqz\") pod \"redhat-marketplace-nzqwf\" (UID: \"b142d96b-87c3-444b-b135-fdddaa658234\") " pod="openshift-marketplace/redhat-marketplace-nzqwf" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.418009 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b142d96b-87c3-444b-b135-fdddaa658234-catalog-content\") pod \"redhat-marketplace-nzqwf\" (UID: \"b142d96b-87c3-444b-b135-fdddaa658234\") " pod="openshift-marketplace/redhat-marketplace-nzqwf" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.418057 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b142d96b-87c3-444b-b135-fdddaa658234-utilities\") pod \"redhat-marketplace-nzqwf\" (UID: \"b142d96b-87c3-444b-b135-fdddaa658234\") " pod="openshift-marketplace/redhat-marketplace-nzqwf" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.429657 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hhftg"] Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.445904 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6kgrd"] Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.467372 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bfhcg"] Feb 24 00:09:39 crc kubenswrapper[4824]: W0224 00:09:39.474227 4824 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb00860ed_9085_40bb_9041_16eac6d88fb1.slice/crio-9c018af5a403cd93073513dceb16d5cd69816c1726faff3f2cab188f2753d464 WatchSource:0}: Error finding container 9c018af5a403cd93073513dceb16d5cd69816c1726faff3f2cab188f2753d464: Status 404 returned error can't find the container with id 9c018af5a403cd93073513dceb16d5cd69816c1726faff3f2cab188f2753d464 Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.490670 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.518904 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-config\") pod \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\" (UID: \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\") " Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.518969 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-serving-cert\") pod \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\" (UID: \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\") " Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.519000 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-config\") pod \"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b\" (UID: \"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b\") " Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.519048 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-client-ca\") pod \"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b\" (UID: 
\"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b\") " Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.519116 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-serving-cert\") pod \"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b\" (UID: \"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b\") " Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.519203 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqjj7\" (UniqueName: \"kubernetes.io/projected/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-kube-api-access-sqjj7\") pod \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\" (UID: \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\") " Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.519245 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw5js\" (UniqueName: \"kubernetes.io/projected/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-kube-api-access-nw5js\") pod \"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b\" (UID: \"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b\") " Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.519271 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-proxy-ca-bundles\") pod \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\" (UID: \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\") " Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.519314 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-client-ca\") pod \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\" (UID: \"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115\") " Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.519564 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/b142d96b-87c3-444b-b135-fdddaa658234-catalog-content\") pod \"redhat-marketplace-nzqwf\" (UID: \"b142d96b-87c3-444b-b135-fdddaa658234\") " pod="openshift-marketplace/redhat-marketplace-nzqwf" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.519597 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b142d96b-87c3-444b-b135-fdddaa658234-utilities\") pod \"redhat-marketplace-nzqwf\" (UID: \"b142d96b-87c3-444b-b135-fdddaa658234\") " pod="openshift-marketplace/redhat-marketplace-nzqwf" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.519689 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hggqz\" (UniqueName: \"kubernetes.io/projected/b142d96b-87c3-444b-b135-fdddaa658234-kube-api-access-hggqz\") pod \"redhat-marketplace-nzqwf\" (UID: \"b142d96b-87c3-444b-b135-fdddaa658234\") " pod="openshift-marketplace/redhat-marketplace-nzqwf" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.520934 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b142d96b-87c3-444b-b135-fdddaa658234-catalog-content\") pod \"redhat-marketplace-nzqwf\" (UID: \"b142d96b-87c3-444b-b135-fdddaa658234\") " pod="openshift-marketplace/redhat-marketplace-nzqwf" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.521349 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-config" (OuterVolumeSpecName: "config") pod "1c4e1d48-7f8d-44b6-97b3-3ceccb35385b" (UID: "1c4e1d48-7f8d-44b6-97b3-3ceccb35385b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.521561 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b142d96b-87c3-444b-b135-fdddaa658234-utilities\") pod \"redhat-marketplace-nzqwf\" (UID: \"b142d96b-87c3-444b-b135-fdddaa658234\") " pod="openshift-marketplace/redhat-marketplace-nzqwf" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.522546 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-config" (OuterVolumeSpecName: "config") pod "49f90ffa-baa8-4d9b-bd17-3a12f9c6f115" (UID: "49f90ffa-baa8-4d9b-bd17-3a12f9c6f115"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.523074 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-client-ca" (OuterVolumeSpecName: "client-ca") pod "49f90ffa-baa8-4d9b-bd17-3a12f9c6f115" (UID: "49f90ffa-baa8-4d9b-bd17-3a12f9c6f115"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.523421 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "49f90ffa-baa8-4d9b-bd17-3a12f9c6f115" (UID: "49f90ffa-baa8-4d9b-bd17-3a12f9c6f115"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.528012 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-client-ca" (OuterVolumeSpecName: "client-ca") pod "1c4e1d48-7f8d-44b6-97b3-3ceccb35385b" (UID: "1c4e1d48-7f8d-44b6-97b3-3ceccb35385b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.533901 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-kube-api-access-sqjj7" (OuterVolumeSpecName: "kube-api-access-sqjj7") pod "49f90ffa-baa8-4d9b-bd17-3a12f9c6f115" (UID: "49f90ffa-baa8-4d9b-bd17-3a12f9c6f115"). InnerVolumeSpecName "kube-api-access-sqjj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.534074 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1c4e1d48-7f8d-44b6-97b3-3ceccb35385b" (UID: "1c4e1d48-7f8d-44b6-97b3-3ceccb35385b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.538781 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.538977 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "49f90ffa-baa8-4d9b-bd17-3a12f9c6f115" (UID: "49f90ffa-baa8-4d9b-bd17-3a12f9c6f115"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.542075 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hggqz\" (UniqueName: \"kubernetes.io/projected/b142d96b-87c3-444b-b135-fdddaa658234-kube-api-access-hggqz\") pod \"redhat-marketplace-nzqwf\" (UID: \"b142d96b-87c3-444b-b135-fdddaa658234\") " pod="openshift-marketplace/redhat-marketplace-nzqwf" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.544934 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.546776 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-kube-api-access-nw5js" (OuterVolumeSpecName: "kube-api-access-nw5js") pod "1c4e1d48-7f8d-44b6-97b3-3ceccb35385b" (UID: "1c4e1d48-7f8d-44b6-97b3-3ceccb35385b"). InnerVolumeSpecName "kube-api-access-nw5js". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.590709 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mfzkw"] Feb 24 00:09:39 crc kubenswrapper[4824]: E0224 00:09:39.591314 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f90ffa-baa8-4d9b-bd17-3a12f9c6f115" containerName="controller-manager" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.591343 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f90ffa-baa8-4d9b-bd17-3a12f9c6f115" containerName="controller-manager" Feb 24 00:09:39 crc kubenswrapper[4824]: E0224 00:09:39.591362 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c4e1d48-7f8d-44b6-97b3-3ceccb35385b" containerName="route-controller-manager" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.591370 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c4e1d48-7f8d-44b6-97b3-3ceccb35385b" containerName="route-controller-manager" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.591493 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c4e1d48-7f8d-44b6-97b3-3ceccb35385b" containerName="route-controller-manager" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.591505 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="49f90ffa-baa8-4d9b-bd17-3a12f9c6f115" containerName="controller-manager" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.596736 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mfzkw" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.607683 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfzkw"] Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.630307 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08de7fe0-2d54-408b-8e09-3e1b9bcf931a-catalog-content\") pod \"redhat-marketplace-mfzkw\" (UID: \"08de7fe0-2d54-408b-8e09-3e1b9bcf931a\") " pod="openshift-marketplace/redhat-marketplace-mfzkw" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.630373 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08de7fe0-2d54-408b-8e09-3e1b9bcf931a-utilities\") pod \"redhat-marketplace-mfzkw\" (UID: \"08de7fe0-2d54-408b-8e09-3e1b9bcf931a\") " pod="openshift-marketplace/redhat-marketplace-mfzkw" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.630479 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k4qw\" (UniqueName: \"kubernetes.io/projected/08de7fe0-2d54-408b-8e09-3e1b9bcf931a-kube-api-access-8k4qw\") pod \"redhat-marketplace-mfzkw\" (UID: \"08de7fe0-2d54-408b-8e09-3e1b9bcf931a\") " pod="openshift-marketplace/redhat-marketplace-mfzkw" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.630712 4824 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.630725 4824 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-serving-cert\") on node \"crc\" 
DevicePath \"\"" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.630737 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqjj7\" (UniqueName: \"kubernetes.io/projected/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-kube-api-access-sqjj7\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.630749 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw5js\" (UniqueName: \"kubernetes.io/projected/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-kube-api-access-nw5js\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.630759 4824 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.630769 4824 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.630778 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.630787 4824 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.630796 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.663963 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nzqwf" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.732147 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k4qw\" (UniqueName: \"kubernetes.io/projected/08de7fe0-2d54-408b-8e09-3e1b9bcf931a-kube-api-access-8k4qw\") pod \"redhat-marketplace-mfzkw\" (UID: \"08de7fe0-2d54-408b-8e09-3e1b9bcf931a\") " pod="openshift-marketplace/redhat-marketplace-mfzkw" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.732233 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08de7fe0-2d54-408b-8e09-3e1b9bcf931a-catalog-content\") pod \"redhat-marketplace-mfzkw\" (UID: \"08de7fe0-2d54-408b-8e09-3e1b9bcf931a\") " pod="openshift-marketplace/redhat-marketplace-mfzkw" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.732269 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08de7fe0-2d54-408b-8e09-3e1b9bcf931a-utilities\") pod \"redhat-marketplace-mfzkw\" (UID: \"08de7fe0-2d54-408b-8e09-3e1b9bcf931a\") " pod="openshift-marketplace/redhat-marketplace-mfzkw" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.741338 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08de7fe0-2d54-408b-8e09-3e1b9bcf931a-utilities\") pod \"redhat-marketplace-mfzkw\" (UID: \"08de7fe0-2d54-408b-8e09-3e1b9bcf931a\") " pod="openshift-marketplace/redhat-marketplace-mfzkw" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.742806 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08de7fe0-2d54-408b-8e09-3e1b9bcf931a-catalog-content\") pod \"redhat-marketplace-mfzkw\" (UID: \"08de7fe0-2d54-408b-8e09-3e1b9bcf931a\") " 
pod="openshift-marketplace/redhat-marketplace-mfzkw" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.762051 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k4qw\" (UniqueName: \"kubernetes.io/projected/08de7fe0-2d54-408b-8e09-3e1b9bcf931a-kube-api-access-8k4qw\") pod \"redhat-marketplace-mfzkw\" (UID: \"08de7fe0-2d54-408b-8e09-3e1b9bcf931a\") " pod="openshift-marketplace/redhat-marketplace-mfzkw" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.904408 4824 patch_prober.go:28] interesting pod/router-default-5444994796-fp4wq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:09:39 crc kubenswrapper[4824]: [-]has-synced failed: reason withheld Feb 24 00:09:39 crc kubenswrapper[4824]: [+]process-running ok Feb 24 00:09:39 crc kubenswrapper[4824]: healthz check failed Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.904482 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fp4wq" podUID="5b0ff99f-1e04-4e23-895a-a02a303c8daa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.940686 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mfzkw" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.945859 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b64d957-q2tx6"] Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.947029 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b64d957-q2tx6" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.951358 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb"] Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.952536 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb" Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.961917 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb"] Feb 24 00:09:39 crc kubenswrapper[4824]: I0224 00:09:39.966580 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b64d957-q2tx6"] Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.023882 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ccm27"] Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.048075 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/963d91ec-628d-4269-bfc9-2c6ffb4845b9-client-ca\") pod \"controller-manager-7b64d957-q2tx6\" (UID: \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\") " pod="openshift-controller-manager/controller-manager-7b64d957-q2tx6" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.048119 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/963d91ec-628d-4269-bfc9-2c6ffb4845b9-proxy-ca-bundles\") pod \"controller-manager-7b64d957-q2tx6\" (UID: \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\") " pod="openshift-controller-manager/controller-manager-7b64d957-q2tx6" Feb 24 00:09:40 crc kubenswrapper[4824]: 
I0224 00:09:40.048152 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e5605b6-71ca-4b14-9feb-c2036ed86648-config\") pod \"route-controller-manager-57cf5fdf8c-kzjzb\" (UID: \"2e5605b6-71ca-4b14-9feb-c2036ed86648\") " pod="openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.048213 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e5605b6-71ca-4b14-9feb-c2036ed86648-serving-cert\") pod \"route-controller-manager-57cf5fdf8c-kzjzb\" (UID: \"2e5605b6-71ca-4b14-9feb-c2036ed86648\") " pod="openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.048242 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w667c\" (UniqueName: \"kubernetes.io/projected/2e5605b6-71ca-4b14-9feb-c2036ed86648-kube-api-access-w667c\") pod \"route-controller-manager-57cf5fdf8c-kzjzb\" (UID: \"2e5605b6-71ca-4b14-9feb-c2036ed86648\") " pod="openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.048271 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6zpn\" (UniqueName: \"kubernetes.io/projected/963d91ec-628d-4269-bfc9-2c6ffb4845b9-kube-api-access-h6zpn\") pod \"controller-manager-7b64d957-q2tx6\" (UID: \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\") " pod="openshift-controller-manager/controller-manager-7b64d957-q2tx6" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.048526 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/963d91ec-628d-4269-bfc9-2c6ffb4845b9-serving-cert\") pod \"controller-manager-7b64d957-q2tx6\" (UID: \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\") " pod="openshift-controller-manager/controller-manager-7b64d957-q2tx6" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.048626 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e5605b6-71ca-4b14-9feb-c2036ed86648-client-ca\") pod \"route-controller-manager-57cf5fdf8c-kzjzb\" (UID: \"2e5605b6-71ca-4b14-9feb-c2036ed86648\") " pod="openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.048695 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/963d91ec-628d-4269-bfc9-2c6ffb4845b9-config\") pod \"controller-manager-7b64d957-q2tx6\" (UID: \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\") " pod="openshift-controller-manager/controller-manager-7b64d957-q2tx6" Feb 24 00:09:40 crc kubenswrapper[4824]: W0224 00:09:40.051756 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9016587d_3cd5_46d7_bd50_586cd32933f7.slice/crio-7258e3c460d9eb30e7b444c92e1cb2427c103a3e9b4014b73c4a4fe6cecde128 WatchSource:0}: Error finding container 7258e3c460d9eb30e7b444c92e1cb2427c103a3e9b4014b73c4a4fe6cecde128: Status 404 returned error can't find the container with id 7258e3c460d9eb30e7b444c92e1cb2427c103a3e9b4014b73c4a4fe6cecde128 Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.058920 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" event={"ID":"9016587d-3cd5-46d7-bd50-586cd32933f7","Type":"ContainerStarted","Data":"7258e3c460d9eb30e7b444c92e1cb2427c103a3e9b4014b73c4a4fe6cecde128"} Feb 24 00:09:40 crc 
kubenswrapper[4824]: I0224 00:09:40.068275 4824 generic.go:334] "Generic (PLEG): container finished" podID="b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4" containerID="8bc4c4e74760164a686d72949257ade00d0c3cfa06f89e62b82604ab57ead475" exitCode=0 Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.068360 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kgrd" event={"ID":"b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4","Type":"ContainerDied","Data":"8bc4c4e74760164a686d72949257ade00d0c3cfa06f89e62b82604ab57ead475"} Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.068390 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kgrd" event={"ID":"b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4","Type":"ContainerStarted","Data":"a33e62fe6f2549eb1208d3cf356835348bf6325f74507046a34d8f566aaa9f3c"} Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.075088 4824 generic.go:334] "Generic (PLEG): container finished" podID="3e306ddf-071d-47f2-b9b1-bf772963438e" containerID="165f557a643df29a5f3055b0f6055d2350a6f07b3c59175faba79784672bcb83" exitCode=0 Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.075150 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hhftg" event={"ID":"3e306ddf-071d-47f2-b9b1-bf772963438e","Type":"ContainerDied","Data":"165f557a643df29a5f3055b0f6055d2350a6f07b3c59175faba79784672bcb83"} Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.075174 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hhftg" event={"ID":"3e306ddf-071d-47f2-b9b1-bf772963438e","Type":"ContainerStarted","Data":"24c650baf1648fdbc140def26b06acbc896c72aa2095332a4a2cc286bdf3cc0c"} Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.076566 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"393dd5ac-a813-412e-ac2d-1d654d3e5c64","Type":"ContainerStarted","Data":"49c784ba799f271351dfe9e2df364e535f8e5fabd915dec0cc780fab30b4c4e0"} Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.083829 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r" event={"ID":"1c4e1d48-7f8d-44b6-97b3-3ceccb35385b","Type":"ContainerDied","Data":"d168e7b48bb45e2f3eeaabaaae34172927392199794b2e25a747dcf303c33d18"} Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.083878 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.083892 4824 scope.go:117] "RemoveContainer" containerID="b81e27c4f382da70441b9aeabc639347240cb4f01fe8b81ba75c60688c944968" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.088655 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" event={"ID":"49f90ffa-baa8-4d9b-bd17-3a12f9c6f115","Type":"ContainerDied","Data":"3b88c71c7f646381790daa9790f722b3637990f46735a62cbc3312b308a3ab9b"} Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.088814 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jm7qk" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.115242 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb"] Feb 24 00:09:40 crc kubenswrapper[4824]: E0224 00:09:40.116011 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-w667c serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb" podUID="2e5605b6-71ca-4b14-9feb-c2036ed86648" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.117349 4824 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.119369 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b64d957-q2tx6"] Feb 24 00:09:40 crc kubenswrapper[4824]: E0224 00:09:40.124063 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-h6zpn proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-7b64d957-q2tx6" podUID="963d91ec-628d-4269-bfc9-2c6ffb4845b9" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.151399 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/963d91ec-628d-4269-bfc9-2c6ffb4845b9-serving-cert\") pod \"controller-manager-7b64d957-q2tx6\" (UID: \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\") " pod="openshift-controller-manager/controller-manager-7b64d957-q2tx6" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.151442 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e5605b6-71ca-4b14-9feb-c2036ed86648-client-ca\") pod \"route-controller-manager-57cf5fdf8c-kzjzb\" (UID: \"2e5605b6-71ca-4b14-9feb-c2036ed86648\") " pod="openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.151473 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/963d91ec-628d-4269-bfc9-2c6ffb4845b9-config\") pod \"controller-manager-7b64d957-q2tx6\" (UID: \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\") " pod="openshift-controller-manager/controller-manager-7b64d957-q2tx6" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.151510 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/963d91ec-628d-4269-bfc9-2c6ffb4845b9-client-ca\") pod \"controller-manager-7b64d957-q2tx6\" (UID: \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\") " pod="openshift-controller-manager/controller-manager-7b64d957-q2tx6" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.151541 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/963d91ec-628d-4269-bfc9-2c6ffb4845b9-proxy-ca-bundles\") pod \"controller-manager-7b64d957-q2tx6\" (UID: \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\") " pod="openshift-controller-manager/controller-manager-7b64d957-q2tx6" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.151568 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e5605b6-71ca-4b14-9feb-c2036ed86648-config\") pod \"route-controller-manager-57cf5fdf8c-kzjzb\" (UID: \"2e5605b6-71ca-4b14-9feb-c2036ed86648\") " pod="openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 
00:09:40.151625 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e5605b6-71ca-4b14-9feb-c2036ed86648-serving-cert\") pod \"route-controller-manager-57cf5fdf8c-kzjzb\" (UID: \"2e5605b6-71ca-4b14-9feb-c2036ed86648\") " pod="openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.151652 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w667c\" (UniqueName: \"kubernetes.io/projected/2e5605b6-71ca-4b14-9feb-c2036ed86648-kube-api-access-w667c\") pod \"route-controller-manager-57cf5fdf8c-kzjzb\" (UID: \"2e5605b6-71ca-4b14-9feb-c2036ed86648\") " pod="openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.151676 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6zpn\" (UniqueName: \"kubernetes.io/projected/963d91ec-628d-4269-bfc9-2c6ffb4845b9-kube-api-access-h6zpn\") pod \"controller-manager-7b64d957-q2tx6\" (UID: \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\") " pod="openshift-controller-manager/controller-manager-7b64d957-q2tx6" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.154733 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/963d91ec-628d-4269-bfc9-2c6ffb4845b9-proxy-ca-bundles\") pod \"controller-manager-7b64d957-q2tx6\" (UID: \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\") " pod="openshift-controller-manager/controller-manager-7b64d957-q2tx6" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.155619 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e5605b6-71ca-4b14-9feb-c2036ed86648-config\") pod \"route-controller-manager-57cf5fdf8c-kzjzb\" (UID: 
\"2e5605b6-71ca-4b14-9feb-c2036ed86648\") " pod="openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.155652 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e5605b6-71ca-4b14-9feb-c2036ed86648-client-ca\") pod \"route-controller-manager-57cf5fdf8c-kzjzb\" (UID: \"2e5605b6-71ca-4b14-9feb-c2036ed86648\") " pod="openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.156704 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/963d91ec-628d-4269-bfc9-2c6ffb4845b9-client-ca\") pod \"controller-manager-7b64d957-q2tx6\" (UID: \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\") " pod="openshift-controller-manager/controller-manager-7b64d957-q2tx6" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.156790 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/963d91ec-628d-4269-bfc9-2c6ffb4845b9-config\") pod \"controller-manager-7b64d957-q2tx6\" (UID: \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\") " pod="openshift-controller-manager/controller-manager-7b64d957-q2tx6" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.158109 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" event={"ID":"42d75b69-be96-43de-8687-444a81d8ebd5","Type":"ContainerStarted","Data":"3f6ee22c9b5dca052da8f48e0e4d619b636f0d5e852c1428d9ec153607ed60b3"} Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.170446 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nzqwf"] Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.170999 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/2e5605b6-71ca-4b14-9feb-c2036ed86648-serving-cert\") pod \"route-controller-manager-57cf5fdf8c-kzjzb\" (UID: \"2e5605b6-71ca-4b14-9feb-c2036ed86648\") " pod="openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.184731 4824 generic.go:334] "Generic (PLEG): container finished" podID="cc119514-5c95-4925-8a1a-3e6844a34e1e" containerID="0a0ed56ec1c248b7e9a29be5b115fbce61e3aaf9d9d45ae0ef4e45bef08de9cc" exitCode=0 Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.185079 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmjz7" event={"ID":"cc119514-5c95-4925-8a1a-3e6844a34e1e","Type":"ContainerDied","Data":"0a0ed56ec1c248b7e9a29be5b115fbce61e3aaf9d9d45ae0ef4e45bef08de9cc"} Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.189987 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w667c\" (UniqueName: \"kubernetes.io/projected/2e5605b6-71ca-4b14-9feb-c2036ed86648-kube-api-access-w667c\") pod \"route-controller-manager-57cf5fdf8c-kzjzb\" (UID: \"2e5605b6-71ca-4b14-9feb-c2036ed86648\") " pod="openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.214879 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/963d91ec-628d-4269-bfc9-2c6ffb4845b9-serving-cert\") pod \"controller-manager-7b64d957-q2tx6\" (UID: \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\") " pod="openshift-controller-manager/controller-manager-7b64d957-q2tx6" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.215612 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6zpn\" (UniqueName: \"kubernetes.io/projected/963d91ec-628d-4269-bfc9-2c6ffb4845b9-kube-api-access-h6zpn\") pod 
\"controller-manager-7b64d957-q2tx6\" (UID: \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\") " pod="openshift-controller-manager/controller-manager-7b64d957-q2tx6" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.215857 4824 scope.go:117] "RemoveContainer" containerID="5b26b0b907a714e0eb8fa1b65c28de4396d3dfbda124fc0f95f4d779730bf39c" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.216221 4824 generic.go:334] "Generic (PLEG): container finished" podID="b00860ed-9085-40bb-9041-16eac6d88fb1" containerID="b1c2dccdb4d30a9f6f73707633f7236dc1482d098520afb3ff0820f1bf1efcba" exitCode=0 Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.225718 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfhcg" event={"ID":"b00860ed-9085-40bb-9041-16eac6d88fb1","Type":"ContainerDied","Data":"b1c2dccdb4d30a9f6f73707633f7236dc1482d098520afb3ff0820f1bf1efcba"} Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.225793 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfhcg" event={"ID":"b00860ed-9085-40bb-9041-16eac6d88fb1","Type":"ContainerStarted","Data":"9c018af5a403cd93073513dceb16d5cd69816c1726faff3f2cab188f2753d464"} Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.246916 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.246892386 podStartE2EDuration="3.246892386s" podCreationTimestamp="2026-02-24 00:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:40.224287274 +0000 UTC m=+244.213911763" watchObservedRunningTime="2026-02-24 00:09:40.246892386 +0000 UTC m=+244.236516855" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.284463 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="hostpath-provisioner/csi-hostpathplugin-dq9gz" podStartSLOduration=16.284442229 podStartE2EDuration="16.284442229s" podCreationTimestamp="2026-02-24 00:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:40.282533369 +0000 UTC m=+244.272157838" watchObservedRunningTime="2026-02-24 00:09:40.284442229 +0000 UTC m=+244.274066698" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.423720 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zxplg"] Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.425900 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zxplg" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.437232 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.442883 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jm7qk"] Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.459477 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jm7qk"] Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.465960 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zxplg"] Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.466209 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a78c7d6-6ec6-4857-af87-25c5c8cf961d-utilities\") pod \"redhat-operators-zxplg\" (UID: \"7a78c7d6-6ec6-4857-af87-25c5c8cf961d\") " pod="openshift-marketplace/redhat-operators-zxplg" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.466297 
4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc6ds\" (UniqueName: \"kubernetes.io/projected/7a78c7d6-6ec6-4857-af87-25c5c8cf961d-kube-api-access-zc6ds\") pod \"redhat-operators-zxplg\" (UID: \"7a78c7d6-6ec6-4857-af87-25c5c8cf961d\") " pod="openshift-marketplace/redhat-operators-zxplg" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.466339 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a78c7d6-6ec6-4857-af87-25c5c8cf961d-catalog-content\") pod \"redhat-operators-zxplg\" (UID: \"7a78c7d6-6ec6-4857-af87-25c5c8cf961d\") " pod="openshift-marketplace/redhat-operators-zxplg" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.470203 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r"] Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.474352 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9k27r"] Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.568355 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc6ds\" (UniqueName: \"kubernetes.io/projected/7a78c7d6-6ec6-4857-af87-25c5c8cf961d-kube-api-access-zc6ds\") pod \"redhat-operators-zxplg\" (UID: \"7a78c7d6-6ec6-4857-af87-25c5c8cf961d\") " pod="openshift-marketplace/redhat-operators-zxplg" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.568427 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a78c7d6-6ec6-4857-af87-25c5c8cf961d-catalog-content\") pod \"redhat-operators-zxplg\" (UID: \"7a78c7d6-6ec6-4857-af87-25c5c8cf961d\") " pod="openshift-marketplace/redhat-operators-zxplg" Feb 24 00:09:40 crc 
kubenswrapper[4824]: I0224 00:09:40.569129 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a78c7d6-6ec6-4857-af87-25c5c8cf961d-utilities\") pod \"redhat-operators-zxplg\" (UID: \"7a78c7d6-6ec6-4857-af87-25c5c8cf961d\") " pod="openshift-marketplace/redhat-operators-zxplg" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.570983 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a78c7d6-6ec6-4857-af87-25c5c8cf961d-utilities\") pod \"redhat-operators-zxplg\" (UID: \"7a78c7d6-6ec6-4857-af87-25c5c8cf961d\") " pod="openshift-marketplace/redhat-operators-zxplg" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.571769 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a78c7d6-6ec6-4857-af87-25c5c8cf961d-catalog-content\") pod \"redhat-operators-zxplg\" (UID: \"7a78c7d6-6ec6-4857-af87-25c5c8cf961d\") " pod="openshift-marketplace/redhat-operators-zxplg" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.573504 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfzkw"] Feb 24 00:09:40 crc kubenswrapper[4824]: W0224 00:09:40.588444 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08de7fe0_2d54_408b_8e09_3e1b9bcf931a.slice/crio-eef69387238650e8470f3abae2d3e6234452b8da7d8847435def291fdad9a1d8 WatchSource:0}: Error finding container eef69387238650e8470f3abae2d3e6234452b8da7d8847435def291fdad9a1d8: Status 404 returned error can't find the container with id eef69387238650e8470f3abae2d3e6234452b8da7d8847435def291fdad9a1d8 Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.594887 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc6ds\" (UniqueName: 
\"kubernetes.io/projected/7a78c7d6-6ec6-4857-af87-25c5c8cf961d-kube-api-access-zc6ds\") pod \"redhat-operators-zxplg\" (UID: \"7a78c7d6-6ec6-4857-af87-25c5c8cf961d\") " pod="openshift-marketplace/redhat-operators-zxplg" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.688777 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.718108 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c4e1d48-7f8d-44b6-97b3-3ceccb35385b" path="/var/lib/kubelet/pods/1c4e1d48-7f8d-44b6-97b3-3ceccb35385b/volumes" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.718687 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49f90ffa-baa8-4d9b-bd17-3a12f9c6f115" path="/var/lib/kubelet/pods/49f90ffa-baa8-4d9b-bd17-3a12f9c6f115/volumes" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.719547 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.782509 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gl27t"] Feb 24 00:09:40 crc kubenswrapper[4824]: E0224 00:09:40.782954 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="671ced26-8fac-4a17-a516-ab23ebcd6945" containerName="pruner" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.782973 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="671ced26-8fac-4a17-a516-ab23ebcd6945" containerName="pruner" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.783167 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="671ced26-8fac-4a17-a516-ab23ebcd6945" containerName="pruner" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.784557 4824 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gl27t" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.788305 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gl27t"] Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.802635 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zxplg" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.875427 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/671ced26-8fac-4a17-a516-ab23ebcd6945-kube-api-access\") pod \"671ced26-8fac-4a17-a516-ab23ebcd6945\" (UID: \"671ced26-8fac-4a17-a516-ab23ebcd6945\") " Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.875572 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/671ced26-8fac-4a17-a516-ab23ebcd6945-kubelet-dir\") pod \"671ced26-8fac-4a17-a516-ab23ebcd6945\" (UID: \"671ced26-8fac-4a17-a516-ab23ebcd6945\") " Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.876037 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/671ced26-8fac-4a17-a516-ab23ebcd6945-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "671ced26-8fac-4a17-a516-ab23ebcd6945" (UID: "671ced26-8fac-4a17-a516-ab23ebcd6945"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.878599 4824 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/671ced26-8fac-4a17-a516-ab23ebcd6945-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.890924 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/671ced26-8fac-4a17-a516-ab23ebcd6945-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "671ced26-8fac-4a17-a516-ab23ebcd6945" (UID: "671ced26-8fac-4a17-a516-ab23ebcd6945"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.905372 4824 patch_prober.go:28] interesting pod/router-default-5444994796-fp4wq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:09:40 crc kubenswrapper[4824]: [-]has-synced failed: reason withheld Feb 24 00:09:40 crc kubenswrapper[4824]: [+]process-running ok Feb 24 00:09:40 crc kubenswrapper[4824]: healthz check failed Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.905483 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fp4wq" podUID="5b0ff99f-1e04-4e23-895a-a02a303c8daa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.982304 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fhvh\" (UniqueName: \"kubernetes.io/projected/2da73289-3f96-4828-a106-46c3b0469e7d-kube-api-access-7fhvh\") pod \"redhat-operators-gl27t\" (UID: \"2da73289-3f96-4828-a106-46c3b0469e7d\") " 
pod="openshift-marketplace/redhat-operators-gl27t" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.983067 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2da73289-3f96-4828-a106-46c3b0469e7d-catalog-content\") pod \"redhat-operators-gl27t\" (UID: \"2da73289-3f96-4828-a106-46c3b0469e7d\") " pod="openshift-marketplace/redhat-operators-gl27t" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.983155 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2da73289-3f96-4828-a106-46c3b0469e7d-utilities\") pod \"redhat-operators-gl27t\" (UID: \"2da73289-3f96-4828-a106-46c3b0469e7d\") " pod="openshift-marketplace/redhat-operators-gl27t" Feb 24 00:09:40 crc kubenswrapper[4824]: I0224 00:09:40.983293 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/671ced26-8fac-4a17-a516-ab23ebcd6945-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.086384 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2da73289-3f96-4828-a106-46c3b0469e7d-utilities\") pod \"redhat-operators-gl27t\" (UID: \"2da73289-3f96-4828-a106-46c3b0469e7d\") " pod="openshift-marketplace/redhat-operators-gl27t" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.086548 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fhvh\" (UniqueName: \"kubernetes.io/projected/2da73289-3f96-4828-a106-46c3b0469e7d-kube-api-access-7fhvh\") pod \"redhat-operators-gl27t\" (UID: \"2da73289-3f96-4828-a106-46c3b0469e7d\") " pod="openshift-marketplace/redhat-operators-gl27t" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.086640 4824 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2da73289-3f96-4828-a106-46c3b0469e7d-catalog-content\") pod \"redhat-operators-gl27t\" (UID: \"2da73289-3f96-4828-a106-46c3b0469e7d\") " pod="openshift-marketplace/redhat-operators-gl27t" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.087365 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2da73289-3f96-4828-a106-46c3b0469e7d-catalog-content\") pod \"redhat-operators-gl27t\" (UID: \"2da73289-3f96-4828-a106-46c3b0469e7d\") " pod="openshift-marketplace/redhat-operators-gl27t" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.087365 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2da73289-3f96-4828-a106-46c3b0469e7d-utilities\") pod \"redhat-operators-gl27t\" (UID: \"2da73289-3f96-4828-a106-46c3b0469e7d\") " pod="openshift-marketplace/redhat-operators-gl27t" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.134218 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fhvh\" (UniqueName: \"kubernetes.io/projected/2da73289-3f96-4828-a106-46c3b0469e7d-kube-api-access-7fhvh\") pod \"redhat-operators-gl27t\" (UID: \"2da73289-3f96-4828-a106-46c3b0469e7d\") " pod="openshift-marketplace/redhat-operators-gl27t" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.200136 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zxplg"] Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.257221 4824 generic.go:334] "Generic (PLEG): container finished" podID="b142d96b-87c3-444b-b135-fdddaa658234" containerID="5af3da4115b49b00d3bb13283e7fccd617f9a8fbd1e5c6782e319a1b0a15e513" exitCode=0 Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.257395 4824 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nzqwf" event={"ID":"b142d96b-87c3-444b-b135-fdddaa658234","Type":"ContainerDied","Data":"5af3da4115b49b00d3bb13283e7fccd617f9a8fbd1e5c6782e319a1b0a15e513"} Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.257442 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nzqwf" event={"ID":"b142d96b-87c3-444b-b135-fdddaa658234","Type":"ContainerStarted","Data":"6087d7cb108c4772f5476645e00887a465effb8e262d89a746313bbbb9fb34f8"} Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.282098 4824 generic.go:334] "Generic (PLEG): container finished" podID="239fc97c-cb5a-4fa1-965e-7b64c90268ce" containerID="594ea7953af708dc6eec520d0cd46b08f1c6126425d4ad263d064dfe050100f2" exitCode=0 Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.282312 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-xxvzq" event={"ID":"239fc97c-cb5a-4fa1-965e-7b64c90268ce","Type":"ContainerDied","Data":"594ea7953af708dc6eec520d0cd46b08f1c6126425d4ad263d064dfe050100f2"} Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.292858 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.292903 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"671ced26-8fac-4a17-a516-ab23ebcd6945","Type":"ContainerDied","Data":"dc2fc5cc3054549841a2254f5e01ca95caee472926cb159e358b999488865ad7"} Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.292987 4824 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc2fc5cc3054549841a2254f5e01ca95caee472926cb159e358b999488865ad7" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.299323 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" event={"ID":"9016587d-3cd5-46d7-bd50-586cd32933f7","Type":"ContainerStarted","Data":"2298440fb6247a85651f7297a9c931c5be1fbe577d08f83bab4ec4ae10e9e795"} Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.299791 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.302793 4824 generic.go:334] "Generic (PLEG): container finished" podID="393dd5ac-a813-412e-ac2d-1d654d3e5c64" containerID="49c784ba799f271351dfe9e2df364e535f8e5fabd915dec0cc780fab30b4c4e0" exitCode=0 Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.302876 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"393dd5ac-a813-412e-ac2d-1d654d3e5c64","Type":"ContainerDied","Data":"49c784ba799f271351dfe9e2df364e535f8e5fabd915dec0cc780fab30b4c4e0"} Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.306819 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfzkw" 
event={"ID":"08de7fe0-2d54-408b-8e09-3e1b9bcf931a","Type":"ContainerDied","Data":"095e0801ed7074f62f4831cd0263857d8e7741a2b32dd7eae5398cdc3839e533"} Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.306478 4824 generic.go:334] "Generic (PLEG): container finished" podID="08de7fe0-2d54-408b-8e09-3e1b9bcf931a" containerID="095e0801ed7074f62f4831cd0263857d8e7741a2b32dd7eae5398cdc3839e533" exitCode=0 Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.307356 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b64d957-q2tx6" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.307420 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfzkw" event={"ID":"08de7fe0-2d54-408b-8e09-3e1b9bcf931a","Type":"ContainerStarted","Data":"eef69387238650e8470f3abae2d3e6234452b8da7d8847435def291fdad9a1d8"} Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.307650 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.335780 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b64d957-q2tx6" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.337212 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.382891 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" podStartSLOduration=171.382868509 podStartE2EDuration="2m51.382868509s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:41.379955563 +0000 UTC m=+245.369580052" watchObservedRunningTime="2026-02-24 00:09:41.382868509 +0000 UTC m=+245.372492988" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.406296 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/963d91ec-628d-4269-bfc9-2c6ffb4845b9-proxy-ca-bundles\") pod \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\" (UID: \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\") " Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.406358 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/963d91ec-628d-4269-bfc9-2c6ffb4845b9-config\") pod \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\" (UID: \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\") " Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.406395 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e5605b6-71ca-4b14-9feb-c2036ed86648-client-ca\") pod \"2e5605b6-71ca-4b14-9feb-c2036ed86648\" (UID: \"2e5605b6-71ca-4b14-9feb-c2036ed86648\") " Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.406425 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/963d91ec-628d-4269-bfc9-2c6ffb4845b9-client-ca\") pod 
\"963d91ec-628d-4269-bfc9-2c6ffb4845b9\" (UID: \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\") " Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.406485 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w667c\" (UniqueName: \"kubernetes.io/projected/2e5605b6-71ca-4b14-9feb-c2036ed86648-kube-api-access-w667c\") pod \"2e5605b6-71ca-4b14-9feb-c2036ed86648\" (UID: \"2e5605b6-71ca-4b14-9feb-c2036ed86648\") " Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.406531 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e5605b6-71ca-4b14-9feb-c2036ed86648-config\") pod \"2e5605b6-71ca-4b14-9feb-c2036ed86648\" (UID: \"2e5605b6-71ca-4b14-9feb-c2036ed86648\") " Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.407475 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e5605b6-71ca-4b14-9feb-c2036ed86648-client-ca" (OuterVolumeSpecName: "client-ca") pod "2e5605b6-71ca-4b14-9feb-c2036ed86648" (UID: "2e5605b6-71ca-4b14-9feb-c2036ed86648"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.407696 4824 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e5605b6-71ca-4b14-9feb-c2036ed86648-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.409984 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/963d91ec-628d-4269-bfc9-2c6ffb4845b9-client-ca" (OuterVolumeSpecName: "client-ca") pod "963d91ec-628d-4269-bfc9-2c6ffb4845b9" (UID: "963d91ec-628d-4269-bfc9-2c6ffb4845b9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.413835 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gl27t" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.414008 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/963d91ec-628d-4269-bfc9-2c6ffb4845b9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "963d91ec-628d-4269-bfc9-2c6ffb4845b9" (UID: "963d91ec-628d-4269-bfc9-2c6ffb4845b9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.414267 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/963d91ec-628d-4269-bfc9-2c6ffb4845b9-config" (OuterVolumeSpecName: "config") pod "963d91ec-628d-4269-bfc9-2c6ffb4845b9" (UID: "963d91ec-628d-4269-bfc9-2c6ffb4845b9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.424088 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e5605b6-71ca-4b14-9feb-c2036ed86648-config" (OuterVolumeSpecName: "config") pod "2e5605b6-71ca-4b14-9feb-c2036ed86648" (UID: "2e5605b6-71ca-4b14-9feb-c2036ed86648"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.443060 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e5605b6-71ca-4b14-9feb-c2036ed86648-kube-api-access-w667c" (OuterVolumeSpecName: "kube-api-access-w667c") pod "2e5605b6-71ca-4b14-9feb-c2036ed86648" (UID: "2e5605b6-71ca-4b14-9feb-c2036ed86648"). InnerVolumeSpecName "kube-api-access-w667c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.508445 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e5605b6-71ca-4b14-9feb-c2036ed86648-serving-cert\") pod \"2e5605b6-71ca-4b14-9feb-c2036ed86648\" (UID: \"2e5605b6-71ca-4b14-9feb-c2036ed86648\") " Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.509093 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/963d91ec-628d-4269-bfc9-2c6ffb4845b9-serving-cert\") pod \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\" (UID: \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\") " Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.509176 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6zpn\" (UniqueName: \"kubernetes.io/projected/963d91ec-628d-4269-bfc9-2c6ffb4845b9-kube-api-access-h6zpn\") pod \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\" (UID: \"963d91ec-628d-4269-bfc9-2c6ffb4845b9\") " Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.509390 4824 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/963d91ec-628d-4269-bfc9-2c6ffb4845b9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.509403 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/963d91ec-628d-4269-bfc9-2c6ffb4845b9-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.509413 4824 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/963d91ec-628d-4269-bfc9-2c6ffb4845b9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.509422 4824 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-w667c\" (UniqueName: \"kubernetes.io/projected/2e5605b6-71ca-4b14-9feb-c2036ed86648-kube-api-access-w667c\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.509433 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e5605b6-71ca-4b14-9feb-c2036ed86648-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.520083 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e5605b6-71ca-4b14-9feb-c2036ed86648-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2e5605b6-71ca-4b14-9feb-c2036ed86648" (UID: "2e5605b6-71ca-4b14-9feb-c2036ed86648"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.525436 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/963d91ec-628d-4269-bfc9-2c6ffb4845b9-kube-api-access-h6zpn" (OuterVolumeSpecName: "kube-api-access-h6zpn") pod "963d91ec-628d-4269-bfc9-2c6ffb4845b9" (UID: "963d91ec-628d-4269-bfc9-2c6ffb4845b9"). InnerVolumeSpecName "kube-api-access-h6zpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.533286 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/963d91ec-628d-4269-bfc9-2c6ffb4845b9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "963d91ec-628d-4269-bfc9-2c6ffb4845b9" (UID: "963d91ec-628d-4269-bfc9-2c6ffb4845b9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.587023 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jkghx" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.610711 4824 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e5605b6-71ca-4b14-9feb-c2036ed86648-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.610738 4824 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/963d91ec-628d-4269-bfc9-2c6ffb4845b9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.610749 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6zpn\" (UniqueName: \"kubernetes.io/projected/963d91ec-628d-4269-bfc9-2c6ffb4845b9-kube-api-access-h6zpn\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.812643 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gl27t"] Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.902764 4824 patch_prober.go:28] interesting pod/router-default-5444994796-fp4wq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:09:41 crc kubenswrapper[4824]: [-]has-synced failed: reason withheld Feb 24 00:09:41 crc kubenswrapper[4824]: [+]process-running ok Feb 24 00:09:41 crc kubenswrapper[4824]: healthz check failed Feb 24 00:09:41 crc kubenswrapper[4824]: I0224 00:09:41.903103 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fp4wq" podUID="5b0ff99f-1e04-4e23-895a-a02a303c8daa" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:09:42 crc kubenswrapper[4824]: I0224 00:09:42.044386 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:42 crc kubenswrapper[4824]: I0224 00:09:42.049352 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xfl22" Feb 24 00:09:42 crc kubenswrapper[4824]: I0224 00:09:42.384407 4824 generic.go:334] "Generic (PLEG): container finished" podID="2da73289-3f96-4828-a106-46c3b0469e7d" containerID="92570d872625fe189d1225ae3cfcceb0efc1931cef5c4ee603139bb405c9eff3" exitCode=0 Feb 24 00:09:42 crc kubenswrapper[4824]: I0224 00:09:42.384850 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl27t" event={"ID":"2da73289-3f96-4828-a106-46c3b0469e7d","Type":"ContainerDied","Data":"92570d872625fe189d1225ae3cfcceb0efc1931cef5c4ee603139bb405c9eff3"} Feb 24 00:09:42 crc kubenswrapper[4824]: I0224 00:09:42.384921 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl27t" event={"ID":"2da73289-3f96-4828-a106-46c3b0469e7d","Type":"ContainerStarted","Data":"2503a134b22274bc6e70e9fb4c998a82c8a291a8ce5041c5a448cbf0b7c362a7"} Feb 24 00:09:42 crc kubenswrapper[4824]: I0224 00:09:42.387680 4824 generic.go:334] "Generic (PLEG): container finished" podID="7a78c7d6-6ec6-4857-af87-25c5c8cf961d" containerID="05173dd075227354e5c8172cf583a8c34fd894215338d07f6c1a9644348f85b0" exitCode=0 Feb 24 00:09:42 crc kubenswrapper[4824]: I0224 00:09:42.387985 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxplg" event={"ID":"7a78c7d6-6ec6-4857-af87-25c5c8cf961d","Type":"ContainerDied","Data":"05173dd075227354e5c8172cf583a8c34fd894215338d07f6c1a9644348f85b0"} Feb 24 00:09:42 crc kubenswrapper[4824]: I0224 00:09:42.388015 4824 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b64d957-q2tx6" Feb 24 00:09:42 crc kubenswrapper[4824]: I0224 00:09:42.388049 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxplg" event={"ID":"7a78c7d6-6ec6-4857-af87-25c5c8cf961d","Type":"ContainerStarted","Data":"6ef8798cf5f3aadb98a5ae1d2d3bf34bb35cf168ac8076ee6ba9bc741a06b98b"} Feb 24 00:09:42 crc kubenswrapper[4824]: I0224 00:09:42.388457 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb" Feb 24 00:09:42 crc kubenswrapper[4824]: I0224 00:09:42.431655 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5n768" Feb 24 00:09:42 crc kubenswrapper[4824]: I0224 00:09:42.467810 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b64d957-q2tx6"] Feb 24 00:09:42 crc kubenswrapper[4824]: I0224 00:09:42.470047 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7b64d957-q2tx6"] Feb 24 00:09:42 crc kubenswrapper[4824]: I0224 00:09:42.514240 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb"] Feb 24 00:09:42 crc kubenswrapper[4824]: I0224 00:09:42.535338 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57cf5fdf8c-kzjzb"] Feb 24 00:09:42 crc kubenswrapper[4824]: I0224 00:09:42.749440 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e5605b6-71ca-4b14-9feb-c2036ed86648" path="/var/lib/kubelet/pods/2e5605b6-71ca-4b14-9feb-c2036ed86648/volumes" Feb 24 00:09:42 crc kubenswrapper[4824]: I0224 00:09:42.750123 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="963d91ec-628d-4269-bfc9-2c6ffb4845b9" path="/var/lib/kubelet/pods/963d91ec-628d-4269-bfc9-2c6ffb4845b9/volumes" Feb 24 00:09:42 crc kubenswrapper[4824]: I0224 00:09:42.907713 4824 patch_prober.go:28] interesting pod/router-default-5444994796-fp4wq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:09:42 crc kubenswrapper[4824]: [-]has-synced failed: reason withheld Feb 24 00:09:42 crc kubenswrapper[4824]: [+]process-running ok Feb 24 00:09:42 crc kubenswrapper[4824]: healthz check failed Feb 24 00:09:42 crc kubenswrapper[4824]: I0224 00:09:42.907782 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fp4wq" podUID="5b0ff99f-1e04-4e23-895a-a02a303c8daa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:09:42 crc kubenswrapper[4824]: I0224 00:09:42.953842 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-xxvzq" Feb 24 00:09:42 crc kubenswrapper[4824]: I0224 00:09:42.971961 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.056072 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/393dd5ac-a813-412e-ac2d-1d654d3e5c64-kubelet-dir\") pod \"393dd5ac-a813-412e-ac2d-1d654d3e5c64\" (UID: \"393dd5ac-a813-412e-ac2d-1d654d3e5c64\") " Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.056153 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9glzv\" (UniqueName: \"kubernetes.io/projected/239fc97c-cb5a-4fa1-965e-7b64c90268ce-kube-api-access-9glzv\") pod \"239fc97c-cb5a-4fa1-965e-7b64c90268ce\" (UID: \"239fc97c-cb5a-4fa1-965e-7b64c90268ce\") " Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.056212 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/393dd5ac-a813-412e-ac2d-1d654d3e5c64-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "393dd5ac-a813-412e-ac2d-1d654d3e5c64" (UID: "393dd5ac-a813-412e-ac2d-1d654d3e5c64"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.056261 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/239fc97c-cb5a-4fa1-965e-7b64c90268ce-config-volume\") pod \"239fc97c-cb5a-4fa1-965e-7b64c90268ce\" (UID: \"239fc97c-cb5a-4fa1-965e-7b64c90268ce\") " Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.056285 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/393dd5ac-a813-412e-ac2d-1d654d3e5c64-kube-api-access\") pod \"393dd5ac-a813-412e-ac2d-1d654d3e5c64\" (UID: \"393dd5ac-a813-412e-ac2d-1d654d3e5c64\") " Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.056327 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/239fc97c-cb5a-4fa1-965e-7b64c90268ce-secret-volume\") pod \"239fc97c-cb5a-4fa1-965e-7b64c90268ce\" (UID: \"239fc97c-cb5a-4fa1-965e-7b64c90268ce\") " Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.056606 4824 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/393dd5ac-a813-412e-ac2d-1d654d3e5c64-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.058553 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/239fc97c-cb5a-4fa1-965e-7b64c90268ce-config-volume" (OuterVolumeSpecName: "config-volume") pod "239fc97c-cb5a-4fa1-965e-7b64c90268ce" (UID: "239fc97c-cb5a-4fa1-965e-7b64c90268ce"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.080753 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/393dd5ac-a813-412e-ac2d-1d654d3e5c64-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "393dd5ac-a813-412e-ac2d-1d654d3e5c64" (UID: "393dd5ac-a813-412e-ac2d-1d654d3e5c64"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.083126 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/239fc97c-cb5a-4fa1-965e-7b64c90268ce-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "239fc97c-cb5a-4fa1-965e-7b64c90268ce" (UID: "239fc97c-cb5a-4fa1-965e-7b64c90268ce"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.083412 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/239fc97c-cb5a-4fa1-965e-7b64c90268ce-kube-api-access-9glzv" (OuterVolumeSpecName: "kube-api-access-9glzv") pod "239fc97c-cb5a-4fa1-965e-7b64c90268ce" (UID: "239fc97c-cb5a-4fa1-965e-7b64c90268ce"). InnerVolumeSpecName "kube-api-access-9glzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.157959 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9glzv\" (UniqueName: \"kubernetes.io/projected/239fc97c-cb5a-4fa1-965e-7b64c90268ce-kube-api-access-9glzv\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.158012 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/393dd5ac-a813-412e-ac2d-1d654d3e5c64-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.158027 4824 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/239fc97c-cb5a-4fa1-965e-7b64c90268ce-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.158041 4824 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/239fc97c-cb5a-4fa1-965e-7b64c90268ce-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.398139 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"393dd5ac-a813-412e-ac2d-1d654d3e5c64","Type":"ContainerDied","Data":"7c1c92983cde9f50ed89292c9ba489e6609f2698fffaab5523fc26a4c6ca4f45"} Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.398202 4824 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c1c92983cde9f50ed89292c9ba489e6609f2698fffaab5523fc26a4c6ca4f45" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.398281 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.409331 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-xxvzq" event={"ID":"239fc97c-cb5a-4fa1-965e-7b64c90268ce","Type":"ContainerDied","Data":"4d0da2c3da00c6dd6cf100ba43dd4048f42c65ed90df04df6d04e96db17f2c53"} Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.409382 4824 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d0da2c3da00c6dd6cf100ba43dd4048f42c65ed90df04df6d04e96db17f2c53" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.409437 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-xxvzq" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.912819 4824 patch_prober.go:28] interesting pod/router-default-5444994796-fp4wq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:09:43 crc kubenswrapper[4824]: [-]has-synced failed: reason withheld Feb 24 00:09:43 crc kubenswrapper[4824]: [+]process-running ok Feb 24 00:09:43 crc kubenswrapper[4824]: healthz check failed Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.913256 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fp4wq" podUID="5b0ff99f-1e04-4e23-895a-a02a303c8daa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.940684 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-599d8ff48-qktrf"] Feb 24 00:09:43 crc kubenswrapper[4824]: E0224 00:09:43.940911 4824 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="393dd5ac-a813-412e-ac2d-1d654d3e5c64" containerName="pruner" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.940926 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="393dd5ac-a813-412e-ac2d-1d654d3e5c64" containerName="pruner" Feb 24 00:09:43 crc kubenswrapper[4824]: E0224 00:09:43.940949 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="239fc97c-cb5a-4fa1-965e-7b64c90268ce" containerName="collect-profiles" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.940958 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="239fc97c-cb5a-4fa1-965e-7b64c90268ce" containerName="collect-profiles" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.941101 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="393dd5ac-a813-412e-ac2d-1d654d3e5c64" containerName="pruner" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.941122 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="239fc97c-cb5a-4fa1-965e-7b64c90268ce" containerName="collect-profiles" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.941539 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.950861 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.951339 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.959494 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.960783 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.960982 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.967637 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw"] Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.967778 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.968089 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.968612 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.975417 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.975652 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.975891 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.976042 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.976081 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.980093 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-599d8ff48-qktrf"] Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.986004 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ca581b-1f92-4494-ab07-3c56396e862c-serving-cert\") pod \"controller-manager-599d8ff48-qktrf\" (UID: \"c9ca581b-1f92-4494-ab07-3c56396e862c\") " pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.986059 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/921fd719-248a-40f2-901e-de82a8c6b9bc-serving-cert\") pod 
\"route-controller-manager-7968fcb589-gvmfw\" (UID: \"921fd719-248a-40f2-901e-de82a8c6b9bc\") " pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.986086 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9ca581b-1f92-4494-ab07-3c56396e862c-proxy-ca-bundles\") pod \"controller-manager-599d8ff48-qktrf\" (UID: \"c9ca581b-1f92-4494-ab07-3c56396e862c\") " pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.986110 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8k4h\" (UniqueName: \"kubernetes.io/projected/c9ca581b-1f92-4494-ab07-3c56396e862c-kube-api-access-s8k4h\") pod \"controller-manager-599d8ff48-qktrf\" (UID: \"c9ca581b-1f92-4494-ab07-3c56396e862c\") " pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.986139 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/921fd719-248a-40f2-901e-de82a8c6b9bc-config\") pod \"route-controller-manager-7968fcb589-gvmfw\" (UID: \"921fd719-248a-40f2-901e-de82a8c6b9bc\") " pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.986178 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64rs6\" (UniqueName: \"kubernetes.io/projected/921fd719-248a-40f2-901e-de82a8c6b9bc-kube-api-access-64rs6\") pod \"route-controller-manager-7968fcb589-gvmfw\" (UID: \"921fd719-248a-40f2-901e-de82a8c6b9bc\") " pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" Feb 24 
00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.986208 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9ca581b-1f92-4494-ab07-3c56396e862c-config\") pod \"controller-manager-599d8ff48-qktrf\" (UID: \"c9ca581b-1f92-4494-ab07-3c56396e862c\") " pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.986224 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9ca581b-1f92-4494-ab07-3c56396e862c-client-ca\") pod \"controller-manager-599d8ff48-qktrf\" (UID: \"c9ca581b-1f92-4494-ab07-3c56396e862c\") " pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.986244 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/921fd719-248a-40f2-901e-de82a8c6b9bc-client-ca\") pod \"route-controller-manager-7968fcb589-gvmfw\" (UID: \"921fd719-248a-40f2-901e-de82a8c6b9bc\") " pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.988109 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 00:09:43 crc kubenswrapper[4824]: I0224 00:09:43.998348 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw"] Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.089758 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64rs6\" (UniqueName: \"kubernetes.io/projected/921fd719-248a-40f2-901e-de82a8c6b9bc-kube-api-access-64rs6\") pod \"route-controller-manager-7968fcb589-gvmfw\" 
(UID: \"921fd719-248a-40f2-901e-de82a8c6b9bc\") " pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.089827 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9ca581b-1f92-4494-ab07-3c56396e862c-config\") pod \"controller-manager-599d8ff48-qktrf\" (UID: \"c9ca581b-1f92-4494-ab07-3c56396e862c\") " pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.089852 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9ca581b-1f92-4494-ab07-3c56396e862c-client-ca\") pod \"controller-manager-599d8ff48-qktrf\" (UID: \"c9ca581b-1f92-4494-ab07-3c56396e862c\") " pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.089878 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/921fd719-248a-40f2-901e-de82a8c6b9bc-client-ca\") pod \"route-controller-manager-7968fcb589-gvmfw\" (UID: \"921fd719-248a-40f2-901e-de82a8c6b9bc\") " pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.089942 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ca581b-1f92-4494-ab07-3c56396e862c-serving-cert\") pod \"controller-manager-599d8ff48-qktrf\" (UID: \"c9ca581b-1f92-4494-ab07-3c56396e862c\") " pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.089978 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/921fd719-248a-40f2-901e-de82a8c6b9bc-serving-cert\") pod \"route-controller-manager-7968fcb589-gvmfw\" (UID: \"921fd719-248a-40f2-901e-de82a8c6b9bc\") " pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.090003 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9ca581b-1f92-4494-ab07-3c56396e862c-proxy-ca-bundles\") pod \"controller-manager-599d8ff48-qktrf\" (UID: \"c9ca581b-1f92-4494-ab07-3c56396e862c\") " pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.090035 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8k4h\" (UniqueName: \"kubernetes.io/projected/c9ca581b-1f92-4494-ab07-3c56396e862c-kube-api-access-s8k4h\") pod \"controller-manager-599d8ff48-qktrf\" (UID: \"c9ca581b-1f92-4494-ab07-3c56396e862c\") " pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.090070 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/921fd719-248a-40f2-901e-de82a8c6b9bc-config\") pod \"route-controller-manager-7968fcb589-gvmfw\" (UID: \"921fd719-248a-40f2-901e-de82a8c6b9bc\") " pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.091643 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/921fd719-248a-40f2-901e-de82a8c6b9bc-client-ca\") pod \"route-controller-manager-7968fcb589-gvmfw\" (UID: \"921fd719-248a-40f2-901e-de82a8c6b9bc\") " pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" Feb 24 00:09:44 crc 
kubenswrapper[4824]: I0224 00:09:44.091791 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/921fd719-248a-40f2-901e-de82a8c6b9bc-config\") pod \"route-controller-manager-7968fcb589-gvmfw\" (UID: \"921fd719-248a-40f2-901e-de82a8c6b9bc\") " pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.091872 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9ca581b-1f92-4494-ab07-3c56396e862c-config\") pod \"controller-manager-599d8ff48-qktrf\" (UID: \"c9ca581b-1f92-4494-ab07-3c56396e862c\") " pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.092013 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9ca581b-1f92-4494-ab07-3c56396e862c-client-ca\") pod \"controller-manager-599d8ff48-qktrf\" (UID: \"c9ca581b-1f92-4494-ab07-3c56396e862c\") " pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.101020 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9ca581b-1f92-4494-ab07-3c56396e862c-proxy-ca-bundles\") pod \"controller-manager-599d8ff48-qktrf\" (UID: \"c9ca581b-1f92-4494-ab07-3c56396e862c\") " pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.111402 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64rs6\" (UniqueName: \"kubernetes.io/projected/921fd719-248a-40f2-901e-de82a8c6b9bc-kube-api-access-64rs6\") pod \"route-controller-manager-7968fcb589-gvmfw\" (UID: \"921fd719-248a-40f2-901e-de82a8c6b9bc\") " 
pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.120418 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/921fd719-248a-40f2-901e-de82a8c6b9bc-serving-cert\") pod \"route-controller-manager-7968fcb589-gvmfw\" (UID: \"921fd719-248a-40f2-901e-de82a8c6b9bc\") " pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.120434 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ca581b-1f92-4494-ab07-3c56396e862c-serving-cert\") pod \"controller-manager-599d8ff48-qktrf\" (UID: \"c9ca581b-1f92-4494-ab07-3c56396e862c\") " pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.143623 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8k4h\" (UniqueName: \"kubernetes.io/projected/c9ca581b-1f92-4494-ab07-3c56396e862c-kube-api-access-s8k4h\") pod \"controller-manager-599d8ff48-qktrf\" (UID: \"c9ca581b-1f92-4494-ab07-3c56396e862c\") " pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.272983 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.296998 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.612853 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs\") pod \"network-metrics-daemon-98z42\" (UID: \"a648113f-3e46-4170-ba30-7155fefbb413\") " pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.616209 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.630729 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a648113f-3e46-4170-ba30-7155fefbb413-metrics-certs\") pod \"network-metrics-daemon-98z42\" (UID: \"a648113f-3e46-4170-ba30-7155fefbb413\") " pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:09:44 crc kubenswrapper[4824]: W0224 00:09:44.741367 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9ca581b_1f92_4494_ab07_3c56396e862c.slice/crio-3348d9f9c7ede94db1fa63c997fae65b1355a60f270f2e59b5e41ad0ae9f9767 WatchSource:0}: Error finding container 3348d9f9c7ede94db1fa63c997fae65b1355a60f270f2e59b5e41ad0ae9f9767: Status 404 returned error can't find the container with id 3348d9f9c7ede94db1fa63c997fae65b1355a60f270f2e59b5e41ad0ae9f9767 Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.774021 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-599d8ff48-qktrf"] Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.838937 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 24 00:09:44 crc 
kubenswrapper[4824]: I0224 00:09:44.847011 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-98z42" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.901860 4824 patch_prober.go:28] interesting pod/router-default-5444994796-fp4wq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:09:44 crc kubenswrapper[4824]: [-]has-synced failed: reason withheld Feb 24 00:09:44 crc kubenswrapper[4824]: [+]process-running ok Feb 24 00:09:44 crc kubenswrapper[4824]: healthz check failed Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.901953 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fp4wq" podUID="5b0ff99f-1e04-4e23-895a-a02a303c8daa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:09:44 crc kubenswrapper[4824]: I0224 00:09:44.990717 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw"] Feb 24 00:09:45 crc kubenswrapper[4824]: I0224 00:09:45.326913 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-98z42"] Feb 24 00:09:45 crc kubenswrapper[4824]: W0224 00:09:45.413916 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda648113f_3e46_4170_ba30_7155fefbb413.slice/crio-d290ba414cf75b41454b8c91c765b6bdfd8b8c8bbfe011e5cff0544b7a44506e WatchSource:0}: Error finding container d290ba414cf75b41454b8c91c765b6bdfd8b8c8bbfe011e5cff0544b7a44506e: Status 404 returned error can't find the container with id d290ba414cf75b41454b8c91c765b6bdfd8b8c8bbfe011e5cff0544b7a44506e Feb 24 00:09:45 crc kubenswrapper[4824]: I0224 00:09:45.442075 4824 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-98z42" event={"ID":"a648113f-3e46-4170-ba30-7155fefbb413","Type":"ContainerStarted","Data":"d290ba414cf75b41454b8c91c765b6bdfd8b8c8bbfe011e5cff0544b7a44506e"} Feb 24 00:09:45 crc kubenswrapper[4824]: I0224 00:09:45.457509 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" event={"ID":"c9ca581b-1f92-4494-ab07-3c56396e862c","Type":"ContainerStarted","Data":"61152d725e3742568d9637a36b75cafdf6279903aa2b72696fb965f86fc0262a"} Feb 24 00:09:45 crc kubenswrapper[4824]: I0224 00:09:45.457643 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" event={"ID":"c9ca581b-1f92-4494-ab07-3c56396e862c","Type":"ContainerStarted","Data":"3348d9f9c7ede94db1fa63c997fae65b1355a60f270f2e59b5e41ad0ae9f9767"} Feb 24 00:09:45 crc kubenswrapper[4824]: I0224 00:09:45.459029 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" Feb 24 00:09:45 crc kubenswrapper[4824]: I0224 00:09:45.473733 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" event={"ID":"921fd719-248a-40f2-901e-de82a8c6b9bc","Type":"ContainerStarted","Data":"ebe73061062b5fb6815834caff328cf47e5bdaff5958c1bf3a4054a250e190c9"} Feb 24 00:09:45 crc kubenswrapper[4824]: I0224 00:09:45.475638 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" Feb 24 00:09:45 crc kubenswrapper[4824]: I0224 00:09:45.559926 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" podStartSLOduration=5.559898744 podStartE2EDuration="5.559898744s" podCreationTimestamp="2026-02-24 00:09:40 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:45.516332323 +0000 UTC m=+249.505956792" watchObservedRunningTime="2026-02-24 00:09:45.559898744 +0000 UTC m=+249.549523223" Feb 24 00:09:45 crc kubenswrapper[4824]: I0224 00:09:45.906174 4824 patch_prober.go:28] interesting pod/router-default-5444994796-fp4wq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:09:45 crc kubenswrapper[4824]: [-]has-synced failed: reason withheld Feb 24 00:09:45 crc kubenswrapper[4824]: [+]process-running ok Feb 24 00:09:45 crc kubenswrapper[4824]: healthz check failed Feb 24 00:09:45 crc kubenswrapper[4824]: I0224 00:09:45.906613 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fp4wq" podUID="5b0ff99f-1e04-4e23-895a-a02a303c8daa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:09:46 crc kubenswrapper[4824]: I0224 00:09:46.383188 4824 patch_prober.go:28] interesting pod/downloads-7954f5f757-r4c4b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 24 00:09:46 crc kubenswrapper[4824]: I0224 00:09:46.383236 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-r4c4b" podUID="581e69ae-c21a-4a9e-b1ea-9c38256d7b30" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 24 00:09:46 crc kubenswrapper[4824]: I0224 00:09:46.383300 4824 patch_prober.go:28] interesting pod/downloads-7954f5f757-r4c4b container/download-server namespace/openshift-console: Liveness 
probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 24 00:09:46 crc kubenswrapper[4824]: I0224 00:09:46.383379 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-r4c4b" podUID="581e69ae-c21a-4a9e-b1ea-9c38256d7b30" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 24 00:09:46 crc kubenswrapper[4824]: I0224 00:09:46.531459 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" event={"ID":"921fd719-248a-40f2-901e-de82a8c6b9bc","Type":"ContainerStarted","Data":"d35da3b92a34ebadb664ea295a072909420d8af61a251763865432159776fa7d"} Feb 24 00:09:46 crc kubenswrapper[4824]: I0224 00:09:46.533423 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" Feb 24 00:09:46 crc kubenswrapper[4824]: I0224 00:09:46.568253 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-98z42" event={"ID":"a648113f-3e46-4170-ba30-7155fefbb413","Type":"ContainerStarted","Data":"f92de97b983ee28ba4c6e2b7ff56d526cf8e45c43a74482e1c082c075f6792a2"} Feb 24 00:09:46 crc kubenswrapper[4824]: I0224 00:09:46.582490 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" podStartSLOduration=6.582468787 podStartE2EDuration="6.582468787s" podCreationTimestamp="2026-02-24 00:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:46.582234701 +0000 UTC m=+250.571859180" watchObservedRunningTime="2026-02-24 00:09:46.582468787 +0000 UTC m=+250.572093256" 
Feb 24 00:09:46 crc kubenswrapper[4824]: I0224 00:09:46.660621 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" Feb 24 00:09:46 crc kubenswrapper[4824]: I0224 00:09:46.908875 4824 patch_prober.go:28] interesting pod/router-default-5444994796-fp4wq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:09:46 crc kubenswrapper[4824]: [-]has-synced failed: reason withheld Feb 24 00:09:46 crc kubenswrapper[4824]: [+]process-running ok Feb 24 00:09:46 crc kubenswrapper[4824]: healthz check failed Feb 24 00:09:46 crc kubenswrapper[4824]: I0224 00:09:46.909270 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fp4wq" podUID="5b0ff99f-1e04-4e23-895a-a02a303c8daa" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:09:47 crc kubenswrapper[4824]: I0224 00:09:47.367462 4824 patch_prober.go:28] interesting pod/console-f9d7485db-zlnwh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Feb 24 00:09:47 crc kubenswrapper[4824]: I0224 00:09:47.367547 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zlnwh" podUID="5f4f79cd-ada9-4ec7-b779-94d97bdadc97" containerName="console" probeResult="failure" output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" Feb 24 00:09:47 crc kubenswrapper[4824]: I0224 00:09:47.634460 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-98z42" 
event={"ID":"a648113f-3e46-4170-ba30-7155fefbb413","Type":"ContainerStarted","Data":"e8529f608c6974ce2db44f586758015db9551abecd7edd722b8e5fa02da03cae"} Feb 24 00:09:47 crc kubenswrapper[4824]: I0224 00:09:47.900854 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-fp4wq" Feb 24 00:09:47 crc kubenswrapper[4824]: I0224 00:09:47.907640 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-fp4wq" Feb 24 00:09:53 crc kubenswrapper[4824]: I0224 00:09:53.276240 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:09:53 crc kubenswrapper[4824]: I0224 00:09:53.277026 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:09:56 crc kubenswrapper[4824]: I0224 00:09:56.383068 4824 patch_prober.go:28] interesting pod/downloads-7954f5f757-r4c4b container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 24 00:09:56 crc kubenswrapper[4824]: I0224 00:09:56.383579 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-r4c4b" podUID="581e69ae-c21a-4a9e-b1ea-9c38256d7b30" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 24 00:09:56 
crc kubenswrapper[4824]: I0224 00:09:56.383634 4824 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-r4c4b" Feb 24 00:09:56 crc kubenswrapper[4824]: I0224 00:09:56.383248 4824 patch_prober.go:28] interesting pod/downloads-7954f5f757-r4c4b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 24 00:09:56 crc kubenswrapper[4824]: I0224 00:09:56.384005 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-r4c4b" podUID="581e69ae-c21a-4a9e-b1ea-9c38256d7b30" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 24 00:09:56 crc kubenswrapper[4824]: I0224 00:09:56.384330 4824 patch_prober.go:28] interesting pod/downloads-7954f5f757-r4c4b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 24 00:09:56 crc kubenswrapper[4824]: I0224 00:09:56.384377 4824 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"deb3616bdfcc08678302c0e0617b53f7bdd5f57fee7e5facc2929f3b91c7322b"} pod="openshift-console/downloads-7954f5f757-r4c4b" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 24 00:09:56 crc kubenswrapper[4824]: I0224 00:09:56.384422 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-r4c4b" podUID="581e69ae-c21a-4a9e-b1ea-9c38256d7b30" containerName="download-server" containerID="cri-o://deb3616bdfcc08678302c0e0617b53f7bdd5f57fee7e5facc2929f3b91c7322b" gracePeriod=2 Feb 24 00:09:56 crc 
kubenswrapper[4824]: I0224 00:09:56.384405 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-r4c4b" podUID="581e69ae-c21a-4a9e-b1ea-9c38256d7b30" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 24 00:09:57 crc kubenswrapper[4824]: I0224 00:09:57.370038 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:57 crc kubenswrapper[4824]: I0224 00:09:57.374559 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-zlnwh" Feb 24 00:09:57 crc kubenswrapper[4824]: I0224 00:09:57.391546 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-98z42" podStartSLOduration=187.391507676 podStartE2EDuration="3m7.391507676s" podCreationTimestamp="2026-02-24 00:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:48.679889954 +0000 UTC m=+252.669514443" watchObservedRunningTime="2026-02-24 00:09:57.391507676 +0000 UTC m=+261.381132145" Feb 24 00:09:59 crc kubenswrapper[4824]: I0224 00:09:59.552867 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:09:59 crc kubenswrapper[4824]: I0224 00:09:59.848694 4824 generic.go:334] "Generic (PLEG): container finished" podID="581e69ae-c21a-4a9e-b1ea-9c38256d7b30" containerID="deb3616bdfcc08678302c0e0617b53f7bdd5f57fee7e5facc2929f3b91c7322b" exitCode=0 Feb 24 00:09:59 crc kubenswrapper[4824]: I0224 00:09:59.848771 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-r4c4b" 
event={"ID":"581e69ae-c21a-4a9e-b1ea-9c38256d7b30","Type":"ContainerDied","Data":"deb3616bdfcc08678302c0e0617b53f7bdd5f57fee7e5facc2929f3b91c7322b"} Feb 24 00:10:06 crc kubenswrapper[4824]: I0224 00:10:06.384120 4824 patch_prober.go:28] interesting pod/downloads-7954f5f757-r4c4b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 24 00:10:06 crc kubenswrapper[4824]: I0224 00:10:06.384535 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-r4c4b" podUID="581e69ae-c21a-4a9e-b1ea-9c38256d7b30" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 24 00:10:07 crc kubenswrapper[4824]: I0224 00:10:07.352014 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2jqvq" Feb 24 00:10:13 crc kubenswrapper[4824]: I0224 00:10:13.929246 4824 generic.go:334] "Generic (PLEG): container finished" podID="f09bc4be-bc94-4c63-93ec-4bc2fef07d1b" containerID="6b2ac39d85326d80c4e57096bd6873f9064eac38a27f5eddf04bd260901e4edf" exitCode=0 Feb 24 00:10:13 crc kubenswrapper[4824]: I0224 00:10:13.929329 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29531520-969xh" event={"ID":"f09bc4be-bc94-4c63-93ec-4bc2fef07d1b","Type":"ContainerDied","Data":"6b2ac39d85326d80c4e57096bd6873f9064eac38a27f5eddf04bd260901e4edf"} Feb 24 00:10:15 crc kubenswrapper[4824]: I0224 00:10:15.755124 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 24 00:10:15 crc kubenswrapper[4824]: I0224 00:10:15.756544 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 00:10:15 crc kubenswrapper[4824]: I0224 00:10:15.759704 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 24 00:10:15 crc kubenswrapper[4824]: I0224 00:10:15.760078 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 24 00:10:15 crc kubenswrapper[4824]: I0224 00:10:15.771364 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 24 00:10:15 crc kubenswrapper[4824]: I0224 00:10:15.904743 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3174486-c5bc-4ef6-925d-70554d62d1f9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a3174486-c5bc-4ef6-925d-70554d62d1f9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 00:10:15 crc kubenswrapper[4824]: I0224 00:10:15.904811 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3174486-c5bc-4ef6-925d-70554d62d1f9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a3174486-c5bc-4ef6-925d-70554d62d1f9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 00:10:16 crc kubenswrapper[4824]: I0224 00:10:16.007208 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3174486-c5bc-4ef6-925d-70554d62d1f9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a3174486-c5bc-4ef6-925d-70554d62d1f9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 00:10:16 crc kubenswrapper[4824]: I0224 00:10:16.007320 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/a3174486-c5bc-4ef6-925d-70554d62d1f9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a3174486-c5bc-4ef6-925d-70554d62d1f9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 00:10:16 crc kubenswrapper[4824]: I0224 00:10:16.007381 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3174486-c5bc-4ef6-925d-70554d62d1f9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a3174486-c5bc-4ef6-925d-70554d62d1f9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 00:10:16 crc kubenswrapper[4824]: I0224 00:10:16.026068 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3174486-c5bc-4ef6-925d-70554d62d1f9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a3174486-c5bc-4ef6-925d-70554d62d1f9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 00:10:16 crc kubenswrapper[4824]: I0224 00:10:16.124109 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 00:10:16 crc kubenswrapper[4824]: I0224 00:10:16.385236 4824 patch_prober.go:28] interesting pod/downloads-7954f5f757-r4c4b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 24 00:10:16 crc kubenswrapper[4824]: I0224 00:10:16.385309 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-r4c4b" podUID="581e69ae-c21a-4a9e-b1ea-9c38256d7b30" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 24 00:10:16 crc kubenswrapper[4824]: I0224 00:10:16.638619 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29531520-969xh" Feb 24 00:10:16 crc kubenswrapper[4824]: I0224 00:10:16.716421 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc987\" (UniqueName: \"kubernetes.io/projected/f09bc4be-bc94-4c63-93ec-4bc2fef07d1b-kube-api-access-qc987\") pod \"f09bc4be-bc94-4c63-93ec-4bc2fef07d1b\" (UID: \"f09bc4be-bc94-4c63-93ec-4bc2fef07d1b\") " Feb 24 00:10:16 crc kubenswrapper[4824]: I0224 00:10:16.716566 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f09bc4be-bc94-4c63-93ec-4bc2fef07d1b-serviceca\") pod \"f09bc4be-bc94-4c63-93ec-4bc2fef07d1b\" (UID: \"f09bc4be-bc94-4c63-93ec-4bc2fef07d1b\") " Feb 24 00:10:16 crc kubenswrapper[4824]: I0224 00:10:16.717387 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f09bc4be-bc94-4c63-93ec-4bc2fef07d1b-serviceca" (OuterVolumeSpecName: "serviceca") pod "f09bc4be-bc94-4c63-93ec-4bc2fef07d1b" (UID: "f09bc4be-bc94-4c63-93ec-4bc2fef07d1b"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:16 crc kubenswrapper[4824]: I0224 00:10:16.720117 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f09bc4be-bc94-4c63-93ec-4bc2fef07d1b-kube-api-access-qc987" (OuterVolumeSpecName: "kube-api-access-qc987") pod "f09bc4be-bc94-4c63-93ec-4bc2fef07d1b" (UID: "f09bc4be-bc94-4c63-93ec-4bc2fef07d1b"). InnerVolumeSpecName "kube-api-access-qc987". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:10:16 crc kubenswrapper[4824]: I0224 00:10:16.818482 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc987\" (UniqueName: \"kubernetes.io/projected/f09bc4be-bc94-4c63-93ec-4bc2fef07d1b-kube-api-access-qc987\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:16 crc kubenswrapper[4824]: I0224 00:10:16.818561 4824 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f09bc4be-bc94-4c63-93ec-4bc2fef07d1b-serviceca\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:16 crc kubenswrapper[4824]: I0224 00:10:16.950793 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29531520-969xh" event={"ID":"f09bc4be-bc94-4c63-93ec-4bc2fef07d1b","Type":"ContainerDied","Data":"4fbc9ac5dc3c69b5711041434cb98b9bdb56115d74b5092502a2732ff4babe43"} Feb 24 00:10:16 crc kubenswrapper[4824]: I0224 00:10:16.951234 4824 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fbc9ac5dc3c69b5711041434cb98b9bdb56115d74b5092502a2732ff4babe43" Feb 24 00:10:16 crc kubenswrapper[4824]: I0224 00:10:16.950847 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29531520-969xh" Feb 24 00:10:20 crc kubenswrapper[4824]: I0224 00:10:20.552567 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 24 00:10:20 crc kubenswrapper[4824]: E0224 00:10:20.553750 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09bc4be-bc94-4c63-93ec-4bc2fef07d1b" containerName="image-pruner" Feb 24 00:10:20 crc kubenswrapper[4824]: I0224 00:10:20.553768 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09bc4be-bc94-4c63-93ec-4bc2fef07d1b" containerName="image-pruner" Feb 24 00:10:20 crc kubenswrapper[4824]: I0224 00:10:20.553928 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="f09bc4be-bc94-4c63-93ec-4bc2fef07d1b" containerName="image-pruner" Feb 24 00:10:20 crc kubenswrapper[4824]: I0224 00:10:20.554530 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 24 00:10:20 crc kubenswrapper[4824]: I0224 00:10:20.566107 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 24 00:10:20 crc kubenswrapper[4824]: I0224 00:10:20.680415 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/466928f3-88e1-4111-8358-13db2bd5ba58-kube-api-access\") pod \"installer-9-crc\" (UID: \"466928f3-88e1-4111-8358-13db2bd5ba58\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 00:10:20 crc kubenswrapper[4824]: I0224 00:10:20.680539 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/466928f3-88e1-4111-8358-13db2bd5ba58-kubelet-dir\") pod \"installer-9-crc\" (UID: \"466928f3-88e1-4111-8358-13db2bd5ba58\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 00:10:20 crc 
kubenswrapper[4824]: I0224 00:10:20.680579 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/466928f3-88e1-4111-8358-13db2bd5ba58-var-lock\") pod \"installer-9-crc\" (UID: \"466928f3-88e1-4111-8358-13db2bd5ba58\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 00:10:20 crc kubenswrapper[4824]: I0224 00:10:20.781655 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/466928f3-88e1-4111-8358-13db2bd5ba58-kubelet-dir\") pod \"installer-9-crc\" (UID: \"466928f3-88e1-4111-8358-13db2bd5ba58\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 00:10:20 crc kubenswrapper[4824]: I0224 00:10:20.781724 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/466928f3-88e1-4111-8358-13db2bd5ba58-var-lock\") pod \"installer-9-crc\" (UID: \"466928f3-88e1-4111-8358-13db2bd5ba58\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 00:10:20 crc kubenswrapper[4824]: I0224 00:10:20.781819 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/466928f3-88e1-4111-8358-13db2bd5ba58-kube-api-access\") pod \"installer-9-crc\" (UID: \"466928f3-88e1-4111-8358-13db2bd5ba58\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 00:10:20 crc kubenswrapper[4824]: I0224 00:10:20.781877 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/466928f3-88e1-4111-8358-13db2bd5ba58-kubelet-dir\") pod \"installer-9-crc\" (UID: \"466928f3-88e1-4111-8358-13db2bd5ba58\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 00:10:20 crc kubenswrapper[4824]: I0224 00:10:20.781994 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/466928f3-88e1-4111-8358-13db2bd5ba58-var-lock\") pod \"installer-9-crc\" (UID: \"466928f3-88e1-4111-8358-13db2bd5ba58\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 00:10:20 crc kubenswrapper[4824]: I0224 00:10:20.820666 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/466928f3-88e1-4111-8358-13db2bd5ba58-kube-api-access\") pod \"installer-9-crc\" (UID: \"466928f3-88e1-4111-8358-13db2bd5ba58\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 24 00:10:20 crc kubenswrapper[4824]: I0224 00:10:20.885892 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 24 00:10:21 crc kubenswrapper[4824]: E0224 00:10:21.387123 4824 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 24 00:10:21 crc kubenswrapper[4824]: E0224 00:10:21.387346 4824 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7fhvh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gl27t_openshift-marketplace(2da73289-3f96-4828-a106-46c3b0469e7d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 00:10:21 crc kubenswrapper[4824]: E0224 00:10:21.388496 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-gl27t" podUID="2da73289-3f96-4828-a106-46c3b0469e7d" Feb 24 00:10:22 crc 
kubenswrapper[4824]: E0224 00:10:22.909597 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gl27t" podUID="2da73289-3f96-4828-a106-46c3b0469e7d" Feb 24 00:10:23 crc kubenswrapper[4824]: E0224 00:10:23.006227 4824 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 24 00:10:23 crc kubenswrapper[4824]: E0224 00:10:23.006450 4824 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c94kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6kgrd_openshift-marketplace(b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 00:10:23 crc kubenswrapper[4824]: E0224 00:10:23.008081 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6kgrd" podUID="b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4" Feb 24 00:10:23 crc 
kubenswrapper[4824]: I0224 00:10:23.276782 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:10:23 crc kubenswrapper[4824]: I0224 00:10:23.276906 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:10:23 crc kubenswrapper[4824]: I0224 00:10:23.277003 4824 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" Feb 24 00:10:23 crc kubenswrapper[4824]: I0224 00:10:23.278185 4824 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3"} pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 00:10:23 crc kubenswrapper[4824]: I0224 00:10:23.278357 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" containerID="cri-o://13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3" gracePeriod=600 Feb 24 00:10:23 crc kubenswrapper[4824]: I0224 00:10:23.993707 4824 generic.go:334] "Generic (PLEG): container finished" podID="939ca085-9383-42e6-b7d6-37f101137273" 
containerID="13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3" exitCode=0 Feb 24 00:10:23 crc kubenswrapper[4824]: I0224 00:10:23.993794 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" event={"ID":"939ca085-9383-42e6-b7d6-37f101137273","Type":"ContainerDied","Data":"13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3"} Feb 24 00:10:24 crc kubenswrapper[4824]: E0224 00:10:24.155199 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6kgrd" podUID="b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4" Feb 24 00:10:24 crc kubenswrapper[4824]: E0224 00:10:24.221452 4824 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 24 00:10:24 crc kubenswrapper[4824]: E0224 00:10:24.221668 4824 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hggqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-nzqwf_openshift-marketplace(b142d96b-87c3-444b-b135-fdddaa658234): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 00:10:24 crc kubenswrapper[4824]: E0224 00:10:24.224288 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-nzqwf" podUID="b142d96b-87c3-444b-b135-fdddaa658234" Feb 24 00:10:24 crc 
kubenswrapper[4824]: E0224 00:10:24.247208 4824 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 24 00:10:24 crc kubenswrapper[4824]: E0224 00:10:24.247400 4824 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zc6ds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-zxplg_openshift-marketplace(7a78c7d6-6ec6-4857-af87-25c5c8cf961d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 00:10:24 crc kubenswrapper[4824]: E0224 00:10:24.249636 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zxplg" podUID="7a78c7d6-6ec6-4857-af87-25c5c8cf961d" Feb 24 00:10:25 crc kubenswrapper[4824]: E0224 00:10:25.556697 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-nzqwf" podUID="b142d96b-87c3-444b-b135-fdddaa658234" Feb 24 00:10:25 crc kubenswrapper[4824]: E0224 00:10:25.556738 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zxplg" podUID="7a78c7d6-6ec6-4857-af87-25c5c8cf961d" Feb 24 00:10:25 crc kubenswrapper[4824]: E0224 00:10:25.657613 4824 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 24 00:10:25 crc kubenswrapper[4824]: E0224 00:10:25.657815 4824 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t895b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bfhcg_openshift-marketplace(b00860ed-9085-40bb-9041-16eac6d88fb1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 00:10:25 crc kubenswrapper[4824]: E0224 00:10:25.659382 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bfhcg" podUID="b00860ed-9085-40bb-9041-16eac6d88fb1" Feb 24 00:10:25 crc kubenswrapper[4824]: E0224 00:10:25.696897 4824 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 24 00:10:25 crc kubenswrapper[4824]: E0224 00:10:25.697297 4824 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ndbcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-hhftg_openshift-marketplace(3e306ddf-071d-47f2-b9b1-bf772963438e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 00:10:25 crc kubenswrapper[4824]: E0224 00:10:25.698508 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-hhftg" podUID="3e306ddf-071d-47f2-b9b1-bf772963438e" Feb 24 00:10:25 crc kubenswrapper[4824]: E0224 00:10:25.709881 4824 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 24 00:10:25 crc kubenswrapper[4824]: E0224 00:10:25.710069 4824 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8k4qw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mfzkw_openshift-marketplace(08de7fe0-2d54-408b-8e09-3e1b9bcf931a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 00:10:25 crc kubenswrapper[4824]: E0224 00:10:25.715052 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-mfzkw" podUID="08de7fe0-2d54-408b-8e09-3e1b9bcf931a" Feb 24 00:10:25 crc 
kubenswrapper[4824]: E0224 00:10:25.724244 4824 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 24 00:10:25 crc kubenswrapper[4824]: E0224 00:10:25.727429 4824 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bxnlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-dmjz7_openshift-marketplace(cc119514-5c95-4925-8a1a-3e6844a34e1e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 00:10:25 crc kubenswrapper[4824]: E0224 00:10:25.730909 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dmjz7" podUID="cc119514-5c95-4925-8a1a-3e6844a34e1e" Feb 24 00:10:25 crc kubenswrapper[4824]: I0224 00:10:25.879592 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 24 00:10:25 crc kubenswrapper[4824]: I0224 00:10:25.937652 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 24 00:10:25 crc kubenswrapper[4824]: I0224 00:10:25.986719 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jf5jw"] Feb 24 00:10:26 crc kubenswrapper[4824]: I0224 00:10:26.007791 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a3174486-c5bc-4ef6-925d-70554d62d1f9","Type":"ContainerStarted","Data":"1c14510da4c8e4a0e7f0e53a8715b277a09e7b810b82c332501cf22906c6321c"} Feb 24 00:10:26 crc kubenswrapper[4824]: I0224 00:10:26.010125 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" event={"ID":"939ca085-9383-42e6-b7d6-37f101137273","Type":"ContainerStarted","Data":"ec5f29f7aaf13391c2278f1eb972e5c2f9ed40d998b7f6d08d6d97e54173df94"} Feb 24 00:10:26 crc kubenswrapper[4824]: I0224 00:10:26.017928 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"466928f3-88e1-4111-8358-13db2bd5ba58","Type":"ContainerStarted","Data":"3e14bb11973a03d1681cf1c9d6b14d165f03b40a1a2b66d541ff08ad0753d14f"} Feb 24 00:10:26 crc kubenswrapper[4824]: I0224 00:10:26.021258 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-r4c4b" event={"ID":"581e69ae-c21a-4a9e-b1ea-9c38256d7b30","Type":"ContainerStarted","Data":"5380aecc20f70dc2f48069e8ddfce36ada45db6c57f2622f028b0cec6c77999b"} Feb 24 00:10:26 crc kubenswrapper[4824]: I0224 00:10:26.022128 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-r4c4b" Feb 24 00:10:26 crc kubenswrapper[4824]: I0224 00:10:26.024713 4824 patch_prober.go:28] interesting pod/downloads-7954f5f757-r4c4b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 24 00:10:26 crc kubenswrapper[4824]: I0224 00:10:26.024785 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-r4c4b" podUID="581e69ae-c21a-4a9e-b1ea-9c38256d7b30" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 24 00:10:26 crc kubenswrapper[4824]: E0224 00:10:26.036931 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-hhftg" podUID="3e306ddf-071d-47f2-b9b1-bf772963438e" Feb 24 00:10:26 crc kubenswrapper[4824]: E0224 00:10:26.037309 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-bfhcg" podUID="b00860ed-9085-40bb-9041-16eac6d88fb1" Feb 24 00:10:26 crc kubenswrapper[4824]: E0224 00:10:26.037378 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-mfzkw" podUID="08de7fe0-2d54-408b-8e09-3e1b9bcf931a" Feb 24 00:10:26 crc kubenswrapper[4824]: E0224 00:10:26.037446 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dmjz7" podUID="cc119514-5c95-4925-8a1a-3e6844a34e1e" Feb 24 00:10:26 crc kubenswrapper[4824]: I0224 00:10:26.383319 4824 patch_prober.go:28] interesting pod/downloads-7954f5f757-r4c4b container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 24 00:10:26 crc kubenswrapper[4824]: I0224 00:10:26.383346 4824 patch_prober.go:28] interesting pod/downloads-7954f5f757-r4c4b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 24 00:10:26 crc kubenswrapper[4824]: I0224 00:10:26.383849 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-r4c4b" podUID="581e69ae-c21a-4a9e-b1ea-9c38256d7b30" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 24 00:10:26 crc 
kubenswrapper[4824]: I0224 00:10:26.383911 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-r4c4b" podUID="581e69ae-c21a-4a9e-b1ea-9c38256d7b30" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 24 00:10:27 crc kubenswrapper[4824]: I0224 00:10:27.048321 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"466928f3-88e1-4111-8358-13db2bd5ba58","Type":"ContainerStarted","Data":"713672e1f8b7451df74461c3a60e57bbab1ffd950fc9f64d1a805ac3787f3127"} Feb 24 00:10:27 crc kubenswrapper[4824]: I0224 00:10:27.054108 4824 generic.go:334] "Generic (PLEG): container finished" podID="a3174486-c5bc-4ef6-925d-70554d62d1f9" containerID="33248aff8f19a69ca84b1e0b96ee793ea313f709995ac8ddf21c077968256a9e" exitCode=0 Feb 24 00:10:27 crc kubenswrapper[4824]: I0224 00:10:27.054405 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a3174486-c5bc-4ef6-925d-70554d62d1f9","Type":"ContainerDied","Data":"33248aff8f19a69ca84b1e0b96ee793ea313f709995ac8ddf21c077968256a9e"} Feb 24 00:10:27 crc kubenswrapper[4824]: I0224 00:10:27.055855 4824 patch_prober.go:28] interesting pod/downloads-7954f5f757-r4c4b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 24 00:10:27 crc kubenswrapper[4824]: I0224 00:10:27.055910 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-r4c4b" podUID="581e69ae-c21a-4a9e-b1ea-9c38256d7b30" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 24 00:10:27 crc kubenswrapper[4824]: I0224 
00:10:27.072860 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=7.07283417 podStartE2EDuration="7.07283417s" podCreationTimestamp="2026-02-24 00:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:27.071118153 +0000 UTC m=+291.060742622" watchObservedRunningTime="2026-02-24 00:10:27.07283417 +0000 UTC m=+291.062458629" Feb 24 00:10:28 crc kubenswrapper[4824]: I0224 00:10:28.061371 4824 patch_prober.go:28] interesting pod/downloads-7954f5f757-r4c4b container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Feb 24 00:10:28 crc kubenswrapper[4824]: I0224 00:10:28.061937 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-r4c4b" podUID="581e69ae-c21a-4a9e-b1ea-9c38256d7b30" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Feb 24 00:10:28 crc kubenswrapper[4824]: I0224 00:10:28.313365 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 00:10:28 crc kubenswrapper[4824]: I0224 00:10:28.400349 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3174486-c5bc-4ef6-925d-70554d62d1f9-kubelet-dir\") pod \"a3174486-c5bc-4ef6-925d-70554d62d1f9\" (UID: \"a3174486-c5bc-4ef6-925d-70554d62d1f9\") " Feb 24 00:10:28 crc kubenswrapper[4824]: I0224 00:10:28.400452 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3174486-c5bc-4ef6-925d-70554d62d1f9-kube-api-access\") pod \"a3174486-c5bc-4ef6-925d-70554d62d1f9\" (UID: \"a3174486-c5bc-4ef6-925d-70554d62d1f9\") " Feb 24 00:10:28 crc kubenswrapper[4824]: I0224 00:10:28.400575 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3174486-c5bc-4ef6-925d-70554d62d1f9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a3174486-c5bc-4ef6-925d-70554d62d1f9" (UID: "a3174486-c5bc-4ef6-925d-70554d62d1f9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:10:28 crc kubenswrapper[4824]: I0224 00:10:28.400819 4824 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3174486-c5bc-4ef6-925d-70554d62d1f9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:28 crc kubenswrapper[4824]: I0224 00:10:28.409922 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3174486-c5bc-4ef6-925d-70554d62d1f9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a3174486-c5bc-4ef6-925d-70554d62d1f9" (UID: "a3174486-c5bc-4ef6-925d-70554d62d1f9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:10:28 crc kubenswrapper[4824]: I0224 00:10:28.502250 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3174486-c5bc-4ef6-925d-70554d62d1f9-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:29 crc kubenswrapper[4824]: I0224 00:10:29.070876 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a3174486-c5bc-4ef6-925d-70554d62d1f9","Type":"ContainerDied","Data":"1c14510da4c8e4a0e7f0e53a8715b277a09e7b810b82c332501cf22906c6321c"} Feb 24 00:10:29 crc kubenswrapper[4824]: I0224 00:10:29.071411 4824 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c14510da4c8e4a0e7f0e53a8715b277a09e7b810b82c332501cf22906c6321c" Feb 24 00:10:29 crc kubenswrapper[4824]: I0224 00:10:29.070984 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 00:10:36 crc kubenswrapper[4824]: I0224 00:10:36.397026 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-r4c4b" Feb 24 00:10:36 crc kubenswrapper[4824]: I0224 00:10:36.478254 4824 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 24 00:10:37 crc kubenswrapper[4824]: I0224 00:10:37.227874 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-599d8ff48-qktrf"] Feb 24 00:10:37 crc kubenswrapper[4824]: I0224 00:10:37.228323 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" podUID="c9ca581b-1f92-4494-ab07-3c56396e862c" containerName="controller-manager" containerID="cri-o://61152d725e3742568d9637a36b75cafdf6279903aa2b72696fb965f86fc0262a" 
gracePeriod=30 Feb 24 00:10:37 crc kubenswrapper[4824]: I0224 00:10:37.337332 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw"] Feb 24 00:10:37 crc kubenswrapper[4824]: I0224 00:10:37.338279 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" podUID="921fd719-248a-40f2-901e-de82a8c6b9bc" containerName="route-controller-manager" containerID="cri-o://d35da3b92a34ebadb664ea295a072909420d8af61a251763865432159776fa7d" gracePeriod=30 Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.123784 4824 generic.go:334] "Generic (PLEG): container finished" podID="921fd719-248a-40f2-901e-de82a8c6b9bc" containerID="d35da3b92a34ebadb664ea295a072909420d8af61a251763865432159776fa7d" exitCode=0 Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.124133 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" event={"ID":"921fd719-248a-40f2-901e-de82a8c6b9bc","Type":"ContainerDied","Data":"d35da3b92a34ebadb664ea295a072909420d8af61a251763865432159776fa7d"} Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.126842 4824 generic.go:334] "Generic (PLEG): container finished" podID="c9ca581b-1f92-4494-ab07-3c56396e862c" containerID="61152d725e3742568d9637a36b75cafdf6279903aa2b72696fb965f86fc0262a" exitCode=0 Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.126887 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" event={"ID":"c9ca581b-1f92-4494-ab07-3c56396e862c","Type":"ContainerDied","Data":"61152d725e3742568d9637a36b75cafdf6279903aa2b72696fb965f86fc0262a"} Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.291993 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.337828 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8"] Feb 24 00:10:39 crc kubenswrapper[4824]: E0224 00:10:39.338719 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921fd719-248a-40f2-901e-de82a8c6b9bc" containerName="route-controller-manager" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.338751 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="921fd719-248a-40f2-901e-de82a8c6b9bc" containerName="route-controller-manager" Feb 24 00:10:39 crc kubenswrapper[4824]: E0224 00:10:39.338769 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3174486-c5bc-4ef6-925d-70554d62d1f9" containerName="pruner" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.338782 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3174486-c5bc-4ef6-925d-70554d62d1f9" containerName="pruner" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.338916 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="921fd719-248a-40f2-901e-de82a8c6b9bc" containerName="route-controller-manager" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.338936 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3174486-c5bc-4ef6-925d-70554d62d1f9" containerName="pruner" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.339481 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.345567 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8"] Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.380995 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64rs6\" (UniqueName: \"kubernetes.io/projected/921fd719-248a-40f2-901e-de82a8c6b9bc-kube-api-access-64rs6\") pod \"921fd719-248a-40f2-901e-de82a8c6b9bc\" (UID: \"921fd719-248a-40f2-901e-de82a8c6b9bc\") " Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.382192 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/921fd719-248a-40f2-901e-de82a8c6b9bc-config\") pod \"921fd719-248a-40f2-901e-de82a8c6b9bc\" (UID: \"921fd719-248a-40f2-901e-de82a8c6b9bc\") " Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.382264 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/921fd719-248a-40f2-901e-de82a8c6b9bc-client-ca\") pod \"921fd719-248a-40f2-901e-de82a8c6b9bc\" (UID: \"921fd719-248a-40f2-901e-de82a8c6b9bc\") " Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.382379 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/921fd719-248a-40f2-901e-de82a8c6b9bc-serving-cert\") pod \"921fd719-248a-40f2-901e-de82a8c6b9bc\" (UID: \"921fd719-248a-40f2-901e-de82a8c6b9bc\") " Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.383163 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/921fd719-248a-40f2-901e-de82a8c6b9bc-client-ca" (OuterVolumeSpecName: "client-ca") pod "921fd719-248a-40f2-901e-de82a8c6b9bc" 
(UID: "921fd719-248a-40f2-901e-de82a8c6b9bc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.383508 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/921fd719-248a-40f2-901e-de82a8c6b9bc-config" (OuterVolumeSpecName: "config") pod "921fd719-248a-40f2-901e-de82a8c6b9bc" (UID: "921fd719-248a-40f2-901e-de82a8c6b9bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.388231 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/921fd719-248a-40f2-901e-de82a8c6b9bc-kube-api-access-64rs6" (OuterVolumeSpecName: "kube-api-access-64rs6") pod "921fd719-248a-40f2-901e-de82a8c6b9bc" (UID: "921fd719-248a-40f2-901e-de82a8c6b9bc"). InnerVolumeSpecName "kube-api-access-64rs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.395231 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921fd719-248a-40f2-901e-de82a8c6b9bc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "921fd719-248a-40f2-901e-de82a8c6b9bc" (UID: "921fd719-248a-40f2-901e-de82a8c6b9bc"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.484141 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c800062-d998-4df3-97e1-ca5df1a57de9-config\") pod \"route-controller-manager-7f4db755ff-dfrn8\" (UID: \"1c800062-d998-4df3-97e1-ca5df1a57de9\") " pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.484441 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c800062-d998-4df3-97e1-ca5df1a57de9-client-ca\") pod \"route-controller-manager-7f4db755ff-dfrn8\" (UID: \"1c800062-d998-4df3-97e1-ca5df1a57de9\") " pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.484619 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c800062-d998-4df3-97e1-ca5df1a57de9-serving-cert\") pod \"route-controller-manager-7f4db755ff-dfrn8\" (UID: \"1c800062-d998-4df3-97e1-ca5df1a57de9\") " pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.484713 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45tz4\" (UniqueName: \"kubernetes.io/projected/1c800062-d998-4df3-97e1-ca5df1a57de9-kube-api-access-45tz4\") pod \"route-controller-manager-7f4db755ff-dfrn8\" (UID: \"1c800062-d998-4df3-97e1-ca5df1a57de9\") " pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.485233 4824 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/921fd719-248a-40f2-901e-de82a8c6b9bc-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.485289 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64rs6\" (UniqueName: \"kubernetes.io/projected/921fd719-248a-40f2-901e-de82a8c6b9bc-kube-api-access-64rs6\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.485316 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/921fd719-248a-40f2-901e-de82a8c6b9bc-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.485329 4824 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/921fd719-248a-40f2-901e-de82a8c6b9bc-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.586770 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c800062-d998-4df3-97e1-ca5df1a57de9-client-ca\") pod \"route-controller-manager-7f4db755ff-dfrn8\" (UID: \"1c800062-d998-4df3-97e1-ca5df1a57de9\") " pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.586821 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c800062-d998-4df3-97e1-ca5df1a57de9-serving-cert\") pod \"route-controller-manager-7f4db755ff-dfrn8\" (UID: \"1c800062-d998-4df3-97e1-ca5df1a57de9\") " pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.586851 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45tz4\" (UniqueName: 
\"kubernetes.io/projected/1c800062-d998-4df3-97e1-ca5df1a57de9-kube-api-access-45tz4\") pod \"route-controller-manager-7f4db755ff-dfrn8\" (UID: \"1c800062-d998-4df3-97e1-ca5df1a57de9\") " pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.586947 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c800062-d998-4df3-97e1-ca5df1a57de9-config\") pod \"route-controller-manager-7f4db755ff-dfrn8\" (UID: \"1c800062-d998-4df3-97e1-ca5df1a57de9\") " pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.587973 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c800062-d998-4df3-97e1-ca5df1a57de9-client-ca\") pod \"route-controller-manager-7f4db755ff-dfrn8\" (UID: \"1c800062-d998-4df3-97e1-ca5df1a57de9\") " pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.589953 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c800062-d998-4df3-97e1-ca5df1a57de9-config\") pod \"route-controller-manager-7f4db755ff-dfrn8\" (UID: \"1c800062-d998-4df3-97e1-ca5df1a57de9\") " pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.591665 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c800062-d998-4df3-97e1-ca5df1a57de9-serving-cert\") pod \"route-controller-manager-7f4db755ff-dfrn8\" (UID: \"1c800062-d998-4df3-97e1-ca5df1a57de9\") " pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" Feb 24 00:10:39 crc kubenswrapper[4824]: 
I0224 00:10:39.604296 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45tz4\" (UniqueName: \"kubernetes.io/projected/1c800062-d998-4df3-97e1-ca5df1a57de9-kube-api-access-45tz4\") pod \"route-controller-manager-7f4db755ff-dfrn8\" (UID: \"1c800062-d998-4df3-97e1-ca5df1a57de9\") " pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.627363 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.665944 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.789596 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9ca581b-1f92-4494-ab07-3c56396e862c-client-ca\") pod \"c9ca581b-1f92-4494-ab07-3c56396e862c\" (UID: \"c9ca581b-1f92-4494-ab07-3c56396e862c\") " Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.790089 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ca581b-1f92-4494-ab07-3c56396e862c-serving-cert\") pod \"c9ca581b-1f92-4494-ab07-3c56396e862c\" (UID: \"c9ca581b-1f92-4494-ab07-3c56396e862c\") " Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.790144 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9ca581b-1f92-4494-ab07-3c56396e862c-proxy-ca-bundles\") pod \"c9ca581b-1f92-4494-ab07-3c56396e862c\" (UID: \"c9ca581b-1f92-4494-ab07-3c56396e862c\") " Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.790236 4824 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9ca581b-1f92-4494-ab07-3c56396e862c-config\") pod \"c9ca581b-1f92-4494-ab07-3c56396e862c\" (UID: \"c9ca581b-1f92-4494-ab07-3c56396e862c\") " Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.790266 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8k4h\" (UniqueName: \"kubernetes.io/projected/c9ca581b-1f92-4494-ab07-3c56396e862c-kube-api-access-s8k4h\") pod \"c9ca581b-1f92-4494-ab07-3c56396e862c\" (UID: \"c9ca581b-1f92-4494-ab07-3c56396e862c\") " Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.791818 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9ca581b-1f92-4494-ab07-3c56396e862c-client-ca" (OuterVolumeSpecName: "client-ca") pod "c9ca581b-1f92-4494-ab07-3c56396e862c" (UID: "c9ca581b-1f92-4494-ab07-3c56396e862c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.793665 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9ca581b-1f92-4494-ab07-3c56396e862c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c9ca581b-1f92-4494-ab07-3c56396e862c" (UID: "c9ca581b-1f92-4494-ab07-3c56396e862c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.794149 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9ca581b-1f92-4494-ab07-3c56396e862c-config" (OuterVolumeSpecName: "config") pod "c9ca581b-1f92-4494-ab07-3c56396e862c" (UID: "c9ca581b-1f92-4494-ab07-3c56396e862c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.797885 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9ca581b-1f92-4494-ab07-3c56396e862c-kube-api-access-s8k4h" (OuterVolumeSpecName: "kube-api-access-s8k4h") pod "c9ca581b-1f92-4494-ab07-3c56396e862c" (UID: "c9ca581b-1f92-4494-ab07-3c56396e862c"). InnerVolumeSpecName "kube-api-access-s8k4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.798256 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9ca581b-1f92-4494-ab07-3c56396e862c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c9ca581b-1f92-4494-ab07-3c56396e862c" (UID: "c9ca581b-1f92-4494-ab07-3c56396e862c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.892830 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9ca581b-1f92-4494-ab07-3c56396e862c-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.892892 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8k4h\" (UniqueName: \"kubernetes.io/projected/c9ca581b-1f92-4494-ab07-3c56396e862c-kube-api-access-s8k4h\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.892910 4824 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9ca581b-1f92-4494-ab07-3c56396e862c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.892922 4824 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ca581b-1f92-4494-ab07-3c56396e862c-serving-cert\") on node \"crc\" DevicePath 
\"\"" Feb 24 00:10:39 crc kubenswrapper[4824]: I0224 00:10:39.892931 4824 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9ca581b-1f92-4494-ab07-3c56396e862c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:40 crc kubenswrapper[4824]: I0224 00:10:40.145638 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8"] Feb 24 00:10:40 crc kubenswrapper[4824]: I0224 00:10:40.149285 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" Feb 24 00:10:40 crc kubenswrapper[4824]: I0224 00:10:40.149688 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-599d8ff48-qktrf" event={"ID":"c9ca581b-1f92-4494-ab07-3c56396e862c","Type":"ContainerDied","Data":"3348d9f9c7ede94db1fa63c997fae65b1355a60f270f2e59b5e41ad0ae9f9767"} Feb 24 00:10:40 crc kubenswrapper[4824]: I0224 00:10:40.149856 4824 scope.go:117] "RemoveContainer" containerID="61152d725e3742568d9637a36b75cafdf6279903aa2b72696fb965f86fc0262a" Feb 24 00:10:40 crc kubenswrapper[4824]: I0224 00:10:40.153472 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl27t" event={"ID":"2da73289-3f96-4828-a106-46c3b0469e7d","Type":"ContainerStarted","Data":"09e81517976ec38b505938bb2df2f3b6123c4b30e798621cb83825dcef2c35b1"} Feb 24 00:10:40 crc kubenswrapper[4824]: I0224 00:10:40.164360 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" Feb 24 00:10:40 crc kubenswrapper[4824]: I0224 00:10:40.164341 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw" event={"ID":"921fd719-248a-40f2-901e-de82a8c6b9bc","Type":"ContainerDied","Data":"ebe73061062b5fb6815834caff328cf47e5bdaff5958c1bf3a4054a250e190c9"} Feb 24 00:10:40 crc kubenswrapper[4824]: I0224 00:10:40.177475 4824 scope.go:117] "RemoveContainer" containerID="d35da3b92a34ebadb664ea295a072909420d8af61a251763865432159776fa7d" Feb 24 00:10:40 crc kubenswrapper[4824]: I0224 00:10:40.223058 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw"] Feb 24 00:10:40 crc kubenswrapper[4824]: I0224 00:10:40.230429 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7968fcb589-gvmfw"] Feb 24 00:10:40 crc kubenswrapper[4824]: I0224 00:10:40.234401 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-599d8ff48-qktrf"] Feb 24 00:10:40 crc kubenswrapper[4824]: I0224 00:10:40.236822 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-599d8ff48-qktrf"] Feb 24 00:10:40 crc kubenswrapper[4824]: I0224 00:10:40.708284 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="921fd719-248a-40f2-901e-de82a8c6b9bc" path="/var/lib/kubelet/pods/921fd719-248a-40f2-901e-de82a8c6b9bc/volumes" Feb 24 00:10:40 crc kubenswrapper[4824]: I0224 00:10:40.708911 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9ca581b-1f92-4494-ab07-3c56396e862c" path="/var/lib/kubelet/pods/c9ca581b-1f92-4494-ab07-3c56396e862c/volumes" Feb 24 00:10:41 crc kubenswrapper[4824]: I0224 00:10:41.171829 4824 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" event={"ID":"1c800062-d998-4df3-97e1-ca5df1a57de9","Type":"ContainerStarted","Data":"b96e0735ee158d70b84382ad4a8a1094ebe1136cbc03596e96e9bd01b1c192ba"} Feb 24 00:10:41 crc kubenswrapper[4824]: I0224 00:10:41.171908 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" event={"ID":"1c800062-d998-4df3-97e1-ca5df1a57de9","Type":"ContainerStarted","Data":"0a7a42e872a1ae8acc7406739cad82b1452bea819a0d086289fe5c4eb6a595fd"} Feb 24 00:10:41 crc kubenswrapper[4824]: I0224 00:10:41.175355 4824 generic.go:334] "Generic (PLEG): container finished" podID="2da73289-3f96-4828-a106-46c3b0469e7d" containerID="09e81517976ec38b505938bb2df2f3b6123c4b30e798621cb83825dcef2c35b1" exitCode=0 Feb 24 00:10:41 crc kubenswrapper[4824]: I0224 00:10:41.175426 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl27t" event={"ID":"2da73289-3f96-4828-a106-46c3b0469e7d","Type":"ContainerDied","Data":"09e81517976ec38b505938bb2df2f3b6123c4b30e798621cb83825dcef2c35b1"} Feb 24 00:10:41 crc kubenswrapper[4824]: I0224 00:10:41.176748 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfzkw" event={"ID":"08de7fe0-2d54-408b-8e09-3e1b9bcf931a","Type":"ContainerStarted","Data":"c12d91234669c462b12713951c50eb0c0d50df4a3939e1596a445f3e78f2d6f0"} Feb 24 00:10:41 crc kubenswrapper[4824]: I0224 00:10:41.180145 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nzqwf" event={"ID":"b142d96b-87c3-444b-b135-fdddaa658234","Type":"ContainerStarted","Data":"8bd5382363dfe954b11d2958183ea67ba5ab63752a6364c784c6c9e09c7286e0"} Feb 24 00:10:41 crc kubenswrapper[4824]: I0224 00:10:41.980752 4824 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager/controller-manager-6b978c5766-k8r5n"] Feb 24 00:10:41 crc kubenswrapper[4824]: E0224 00:10:41.981450 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ca581b-1f92-4494-ab07-3c56396e862c" containerName="controller-manager" Feb 24 00:10:41 crc kubenswrapper[4824]: I0224 00:10:41.981475 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ca581b-1f92-4494-ab07-3c56396e862c" containerName="controller-manager" Feb 24 00:10:41 crc kubenswrapper[4824]: I0224 00:10:41.981725 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ca581b-1f92-4494-ab07-3c56396e862c" containerName="controller-manager" Feb 24 00:10:41 crc kubenswrapper[4824]: I0224 00:10:41.982345 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" Feb 24 00:10:41 crc kubenswrapper[4824]: I0224 00:10:41.985274 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 00:10:41 crc kubenswrapper[4824]: I0224 00:10:41.987032 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 00:10:41 crc kubenswrapper[4824]: I0224 00:10:41.987468 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 00:10:41 crc kubenswrapper[4824]: I0224 00:10:41.987642 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 00:10:41 crc kubenswrapper[4824]: I0224 00:10:41.987828 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 00:10:41 crc kubenswrapper[4824]: I0224 00:10:41.988601 4824 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 00:10:41 crc kubenswrapper[4824]: I0224 00:10:41.995318 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b978c5766-k8r5n"] Feb 24 00:10:41 crc kubenswrapper[4824]: I0224 00:10:41.995964 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.137422 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48b7664a-e47f-4e07-b650-093a751f389f-config\") pod \"controller-manager-6b978c5766-k8r5n\" (UID: \"48b7664a-e47f-4e07-b650-093a751f389f\") " pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.137600 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48b7664a-e47f-4e07-b650-093a751f389f-proxy-ca-bundles\") pod \"controller-manager-6b978c5766-k8r5n\" (UID: \"48b7664a-e47f-4e07-b650-093a751f389f\") " pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.137648 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfx2m\" (UniqueName: \"kubernetes.io/projected/48b7664a-e47f-4e07-b650-093a751f389f-kube-api-access-vfx2m\") pod \"controller-manager-6b978c5766-k8r5n\" (UID: \"48b7664a-e47f-4e07-b650-093a751f389f\") " pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.137715 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/48b7664a-e47f-4e07-b650-093a751f389f-client-ca\") pod \"controller-manager-6b978c5766-k8r5n\" (UID: \"48b7664a-e47f-4e07-b650-093a751f389f\") " pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.137976 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48b7664a-e47f-4e07-b650-093a751f389f-serving-cert\") pod \"controller-manager-6b978c5766-k8r5n\" (UID: \"48b7664a-e47f-4e07-b650-093a751f389f\") " pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.188259 4824 generic.go:334] "Generic (PLEG): container finished" podID="08de7fe0-2d54-408b-8e09-3e1b9bcf931a" containerID="c12d91234669c462b12713951c50eb0c0d50df4a3939e1596a445f3e78f2d6f0" exitCode=0 Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.188341 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfzkw" event={"ID":"08de7fe0-2d54-408b-8e09-3e1b9bcf931a","Type":"ContainerDied","Data":"c12d91234669c462b12713951c50eb0c0d50df4a3939e1596a445f3e78f2d6f0"} Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.191308 4824 generic.go:334] "Generic (PLEG): container finished" podID="b142d96b-87c3-444b-b135-fdddaa658234" containerID="8bd5382363dfe954b11d2958183ea67ba5ab63752a6364c784c6c9e09c7286e0" exitCode=0 Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.191425 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nzqwf" event={"ID":"b142d96b-87c3-444b-b135-fdddaa658234","Type":"ContainerDied","Data":"8bd5382363dfe954b11d2958183ea67ba5ab63752a6364c784c6c9e09c7286e0"} Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.191588 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.208214 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.239897 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48b7664a-e47f-4e07-b650-093a751f389f-config\") pod \"controller-manager-6b978c5766-k8r5n\" (UID: \"48b7664a-e47f-4e07-b650-093a751f389f\") " pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.239976 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48b7664a-e47f-4e07-b650-093a751f389f-proxy-ca-bundles\") pod \"controller-manager-6b978c5766-k8r5n\" (UID: \"48b7664a-e47f-4e07-b650-093a751f389f\") " pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.240006 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfx2m\" (UniqueName: \"kubernetes.io/projected/48b7664a-e47f-4e07-b650-093a751f389f-kube-api-access-vfx2m\") pod \"controller-manager-6b978c5766-k8r5n\" (UID: \"48b7664a-e47f-4e07-b650-093a751f389f\") " pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.240025 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48b7664a-e47f-4e07-b650-093a751f389f-client-ca\") pod \"controller-manager-6b978c5766-k8r5n\" (UID: \"48b7664a-e47f-4e07-b650-093a751f389f\") " pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" Feb 24 00:10:42 
crc kubenswrapper[4824]: I0224 00:10:42.240062 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48b7664a-e47f-4e07-b650-093a751f389f-serving-cert\") pod \"controller-manager-6b978c5766-k8r5n\" (UID: \"48b7664a-e47f-4e07-b650-093a751f389f\") " pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.242371 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48b7664a-e47f-4e07-b650-093a751f389f-client-ca\") pod \"controller-manager-6b978c5766-k8r5n\" (UID: \"48b7664a-e47f-4e07-b650-093a751f389f\") " pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.242805 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48b7664a-e47f-4e07-b650-093a751f389f-config\") pod \"controller-manager-6b978c5766-k8r5n\" (UID: \"48b7664a-e47f-4e07-b650-093a751f389f\") " pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.243031 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48b7664a-e47f-4e07-b650-093a751f389f-proxy-ca-bundles\") pod \"controller-manager-6b978c5766-k8r5n\" (UID: \"48b7664a-e47f-4e07-b650-093a751f389f\") " pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.255607 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48b7664a-e47f-4e07-b650-093a751f389f-serving-cert\") pod \"controller-manager-6b978c5766-k8r5n\" (UID: \"48b7664a-e47f-4e07-b650-093a751f389f\") " 
pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.262653 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" podStartSLOduration=5.262613921 podStartE2EDuration="5.262613921s" podCreationTimestamp="2026-02-24 00:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:42.256174706 +0000 UTC m=+306.245799195" watchObservedRunningTime="2026-02-24 00:10:42.262613921 +0000 UTC m=+306.252238400" Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.264180 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfx2m\" (UniqueName: \"kubernetes.io/projected/48b7664a-e47f-4e07-b650-093a751f389f-kube-api-access-vfx2m\") pod \"controller-manager-6b978c5766-k8r5n\" (UID: \"48b7664a-e47f-4e07-b650-093a751f389f\") " pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" Feb 24 00:10:42 crc kubenswrapper[4824]: I0224 00:10:42.347383 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" Feb 24 00:10:49 crc kubenswrapper[4824]: I0224 00:10:49.637263 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b978c5766-k8r5n"] Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.023851 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" podUID="6f8699c7-58f5-4a80-b5af-5403cb178676" containerName="oauth-openshift" containerID="cri-o://fc75bfe4b562302aad30993aa2a68489589d802790238c3eeef171430ffcd747" gracePeriod=15 Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.268270 4824 generic.go:334] "Generic (PLEG): container finished" podID="6f8699c7-58f5-4a80-b5af-5403cb178676" containerID="fc75bfe4b562302aad30993aa2a68489589d802790238c3eeef171430ffcd747" exitCode=0 Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.268354 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" event={"ID":"6f8699c7-58f5-4a80-b5af-5403cb178676","Type":"ContainerDied","Data":"fc75bfe4b562302aad30993aa2a68489589d802790238c3eeef171430ffcd747"} Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.270300 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" event={"ID":"48b7664a-e47f-4e07-b650-093a751f389f","Type":"ContainerStarted","Data":"104c6e9bfb9c8cc307927700a237315280842a4940df005e0dfceb2b9121c433"} Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.417538 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.491850 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-template-provider-selection\") pod \"6f8699c7-58f5-4a80-b5af-5403cb178676\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.492173 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-service-ca\") pod \"6f8699c7-58f5-4a80-b5af-5403cb178676\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.492198 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-trusted-ca-bundle\") pod \"6f8699c7-58f5-4a80-b5af-5403cb178676\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.492226 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-serving-cert\") pod \"6f8699c7-58f5-4a80-b5af-5403cb178676\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.492254 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-idp-0-file-data\") pod 
\"6f8699c7-58f5-4a80-b5af-5403cb178676\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.492270 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-audit-policies\") pod \"6f8699c7-58f5-4a80-b5af-5403cb178676\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.492290 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-session\") pod \"6f8699c7-58f5-4a80-b5af-5403cb178676\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.492313 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-template-login\") pod \"6f8699c7-58f5-4a80-b5af-5403cb178676\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.492337 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-cliconfig\") pod \"6f8699c7-58f5-4a80-b5af-5403cb178676\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.492354 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-router-certs\") pod \"6f8699c7-58f5-4a80-b5af-5403cb178676\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " Feb 24 00:10:51 crc 
kubenswrapper[4824]: I0224 00:10:51.492376 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-template-error\") pod \"6f8699c7-58f5-4a80-b5af-5403cb178676\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.492399 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4dl5\" (UniqueName: \"kubernetes.io/projected/6f8699c7-58f5-4a80-b5af-5403cb178676-kube-api-access-z4dl5\") pod \"6f8699c7-58f5-4a80-b5af-5403cb178676\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.492420 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-ocp-branding-template\") pod \"6f8699c7-58f5-4a80-b5af-5403cb178676\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.492890 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6f8699c7-58f5-4a80-b5af-5403cb178676" (UID: "6f8699c7-58f5-4a80-b5af-5403cb178676"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.493469 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6f8699c7-58f5-4a80-b5af-5403cb178676-audit-dir\") pod \"6f8699c7-58f5-4a80-b5af-5403cb178676\" (UID: \"6f8699c7-58f5-4a80-b5af-5403cb178676\") " Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.493683 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.493709 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f8699c7-58f5-4a80-b5af-5403cb178676-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "6f8699c7-58f5-4a80-b5af-5403cb178676" (UID: "6f8699c7-58f5-4a80-b5af-5403cb178676"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.494001 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6f8699c7-58f5-4a80-b5af-5403cb178676" (UID: "6f8699c7-58f5-4a80-b5af-5403cb178676"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.497461 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6f8699c7-58f5-4a80-b5af-5403cb178676" (UID: "6f8699c7-58f5-4a80-b5af-5403cb178676"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.498081 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6f8699c7-58f5-4a80-b5af-5403cb178676" (UID: "6f8699c7-58f5-4a80-b5af-5403cb178676"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.503748 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6f8699c7-58f5-4a80-b5af-5403cb178676" (UID: "6f8699c7-58f5-4a80-b5af-5403cb178676"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.505400 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6f8699c7-58f5-4a80-b5af-5403cb178676" (UID: "6f8699c7-58f5-4a80-b5af-5403cb178676"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.505572 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6f8699c7-58f5-4a80-b5af-5403cb178676" (UID: "6f8699c7-58f5-4a80-b5af-5403cb178676"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.505923 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6f8699c7-58f5-4a80-b5af-5403cb178676" (UID: "6f8699c7-58f5-4a80-b5af-5403cb178676"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.505980 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f8699c7-58f5-4a80-b5af-5403cb178676-kube-api-access-z4dl5" (OuterVolumeSpecName: "kube-api-access-z4dl5") pod "6f8699c7-58f5-4a80-b5af-5403cb178676" (UID: "6f8699c7-58f5-4a80-b5af-5403cb178676"). InnerVolumeSpecName "kube-api-access-z4dl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.506963 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6f8699c7-58f5-4a80-b5af-5403cb178676" (UID: "6f8699c7-58f5-4a80-b5af-5403cb178676"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.507700 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6f8699c7-58f5-4a80-b5af-5403cb178676" (UID: "6f8699c7-58f5-4a80-b5af-5403cb178676"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.507944 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6f8699c7-58f5-4a80-b5af-5403cb178676" (UID: "6f8699c7-58f5-4a80-b5af-5403cb178676"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.508716 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6f8699c7-58f5-4a80-b5af-5403cb178676" (UID: "6f8699c7-58f5-4a80-b5af-5403cb178676"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.594312 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.594358 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.594373 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.594388 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4dl5\" (UniqueName: \"kubernetes.io/projected/6f8699c7-58f5-4a80-b5af-5403cb178676-kube-api-access-z4dl5\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.594402 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.594417 4824 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6f8699c7-58f5-4a80-b5af-5403cb178676-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.594432 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.594451 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.594465 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.594479 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.594491 4824 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6f8699c7-58f5-4a80-b5af-5403cb178676-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.594506 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.594535 4824 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6f8699c7-58f5-4a80-b5af-5403cb178676-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.990908 4824 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7484f6b95f-s5j25"] Feb 24 00:10:51 crc kubenswrapper[4824]: E0224 00:10:51.991205 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f8699c7-58f5-4a80-b5af-5403cb178676" containerName="oauth-openshift" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.991226 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f8699c7-58f5-4a80-b5af-5403cb178676" containerName="oauth-openshift" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.991365 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f8699c7-58f5-4a80-b5af-5403cb178676" containerName="oauth-openshift" Feb 24 00:10:51 crc kubenswrapper[4824]: I0224 00:10:51.991922 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.027826 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7484f6b95f-s5j25"] Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.100134 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-system-router-certs\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.100551 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-user-template-login\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " 
pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.100585 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.100723 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.100749 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-system-session\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.100773 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-system-service-ca\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.100790 4824 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-user-template-error\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.100832 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.100852 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-audit-dir\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.100880 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.100916 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-audit-policies\") 
pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.100940 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqddq\" (UniqueName: \"kubernetes.io/projected/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-kube-api-access-vqddq\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.100960 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.101034 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.202138 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " 
pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.202592 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-system-router-certs\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.202627 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-user-template-login\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.202665 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.202720 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.202749 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-system-session\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.202777 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-system-service-ca\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.202804 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-user-template-error\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.202843 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.202875 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-audit-dir\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " 
pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.202929 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.202979 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-audit-policies\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.203006 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqddq\" (UniqueName: \"kubernetes.io/projected/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-kube-api-access-vqddq\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.203034 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.203784 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.203871 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-system-service-ca\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.203880 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-audit-dir\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.204176 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-audit-policies\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.204452 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 
00:10:52.209212 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.209753 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.211807 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-user-template-error\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.216097 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-system-session\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.216412 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.217113 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.220565 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-user-template-login\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.220614 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.230131 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqddq\" (UniqueName: \"kubernetes.io/projected/070c1558-fa58-45f4-9e1e-e4a7d6e21ee3-kube-api-access-vqddq\") pod \"oauth-openshift-7484f6b95f-s5j25\" (UID: \"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.278448 4824 generic.go:334] "Generic (PLEG): 
container finished" podID="3e306ddf-071d-47f2-b9b1-bf772963438e" containerID="4336adaefce1f631229f06eda9fede5b34bd7e94028955471812962455639142" exitCode=0 Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.278544 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hhftg" event={"ID":"3e306ddf-071d-47f2-b9b1-bf772963438e","Type":"ContainerDied","Data":"4336adaefce1f631229f06eda9fede5b34bd7e94028955471812962455639142"} Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.279883 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" event={"ID":"48b7664a-e47f-4e07-b650-093a751f389f","Type":"ContainerStarted","Data":"0f4168ffce744376cf910cb3ea91a3c95473e9ff3db2ca447e36bdb7404167f5"} Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.280156 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.283453 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl27t" event={"ID":"2da73289-3f96-4828-a106-46c3b0469e7d","Type":"ContainerStarted","Data":"f3129bb41cd26ff02fd1b16661272cba00c8572d524dd0295795e4e681de10f0"} Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.289158 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxplg" event={"ID":"7a78c7d6-6ec6-4857-af87-25c5c8cf961d","Type":"ContainerStarted","Data":"ae3090316a207f659563cb6daa67a1cc4d3c280950cb420d2cf6d0ddebc465d5"} Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.292185 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.292484 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" event={"ID":"6f8699c7-58f5-4a80-b5af-5403cb178676","Type":"ContainerDied","Data":"ac2733fb1a358b53d6cecdc04c18db6dd2ffab884268bd9f970b2082f8018667"} Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.292531 4824 scope.go:117] "RemoveContainer" containerID="fc75bfe4b562302aad30993aa2a68489589d802790238c3eeef171430ffcd747" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.293231 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jf5jw" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.299133 4824 generic.go:334] "Generic (PLEG): container finished" podID="b00860ed-9085-40bb-9041-16eac6d88fb1" containerID="3ea11f300330e655d35201dc1283dc47672fa1f01b0818a96fdef96fb4638961" exitCode=0 Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.299205 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfhcg" event={"ID":"b00860ed-9085-40bb-9041-16eac6d88fb1","Type":"ContainerDied","Data":"3ea11f300330e655d35201dc1283dc47672fa1f01b0818a96fdef96fb4638961"} Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.320279 4824 generic.go:334] "Generic (PLEG): container finished" podID="cc119514-5c95-4925-8a1a-3e6844a34e1e" containerID="49dee3eafa362f85cb1a0da9a797f4ffd8f10b2e2e1b307ecd8002d23b24ca9f" exitCode=0 Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.320365 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmjz7" event={"ID":"cc119514-5c95-4925-8a1a-3e6844a34e1e","Type":"ContainerDied","Data":"49dee3eafa362f85cb1a0da9a797f4ffd8f10b2e2e1b307ecd8002d23b24ca9f"} Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.343493 4824 generic.go:334] "Generic (PLEG): container finished" podID="b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4" containerID="25d34726cfc81bf777c29be29729340267c9cddf9a761fe641537d6b9d860999" 
exitCode=0 Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.343607 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kgrd" event={"ID":"b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4","Type":"ContainerDied","Data":"25d34726cfc81bf777c29be29729340267c9cddf9a761fe641537d6b9d860999"} Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.353647 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" podStartSLOduration=15.353624148 podStartE2EDuration="15.353624148s" podCreationTimestamp="2026-02-24 00:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:52.353101854 +0000 UTC m=+316.342726343" watchObservedRunningTime="2026-02-24 00:10:52.353624148 +0000 UTC m=+316.343248617" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.359900 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfzkw" event={"ID":"08de7fe0-2d54-408b-8e09-3e1b9bcf931a","Type":"ContainerStarted","Data":"508ed9b0876f895bd93f2220b145fd5e2693472af97b22cae0f711fdc293c35c"} Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.400882 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.401229 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nzqwf" event={"ID":"b142d96b-87c3-444b-b135-fdddaa658234","Type":"ContainerStarted","Data":"d8cb34947ec733a964cff732c9bb70c2d8c98ea3a605270b5ec9f8c81b631a37"} Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.432216 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gl27t" podStartSLOduration=3.750445131 podStartE2EDuration="1m12.43219018s" podCreationTimestamp="2026-02-24 00:09:40 +0000 UTC" firstStartedPulling="2026-02-24 00:09:42.387644297 +0000 UTC m=+246.377268756" lastFinishedPulling="2026-02-24 00:10:51.069389336 +0000 UTC m=+315.059013805" observedRunningTime="2026-02-24 00:10:52.394935534 +0000 UTC m=+316.384560013" watchObservedRunningTime="2026-02-24 00:10:52.43219018 +0000 UTC m=+316.421814649" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.495060 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mfzkw" podStartSLOduration=3.71584468 podStartE2EDuration="1m13.495023942s" podCreationTimestamp="2026-02-24 00:09:39 +0000 UTC" firstStartedPulling="2026-02-24 00:09:41.316761038 +0000 UTC m=+245.306385517" lastFinishedPulling="2026-02-24 00:10:51.09594031 +0000 UTC m=+315.085564779" observedRunningTime="2026-02-24 00:10:52.471010608 +0000 UTC m=+316.460635087" watchObservedRunningTime="2026-02-24 00:10:52.495023942 +0000 UTC m=+316.484648411" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.495719 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nzqwf" podStartSLOduration=3.669338203 podStartE2EDuration="1m13.495712681s" podCreationTimestamp="2026-02-24 00:09:39 +0000 UTC" 
firstStartedPulling="2026-02-24 00:09:41.269458639 +0000 UTC m=+245.259083108" lastFinishedPulling="2026-02-24 00:10:51.095833117 +0000 UTC m=+315.085457586" observedRunningTime="2026-02-24 00:10:52.494118237 +0000 UTC m=+316.483742706" watchObservedRunningTime="2026-02-24 00:10:52.495712681 +0000 UTC m=+316.485337150" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.513091 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jf5jw"] Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.520496 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jf5jw"] Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.720307 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f8699c7-58f5-4a80-b5af-5403cb178676" path="/var/lib/kubelet/pods/6f8699c7-58f5-4a80-b5af-5403cb178676/volumes" Feb 24 00:10:52 crc kubenswrapper[4824]: I0224 00:10:52.953128 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7484f6b95f-s5j25"] Feb 24 00:10:52 crc kubenswrapper[4824]: W0224 00:10:52.959180 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod070c1558_fa58_45f4_9e1e_e4a7d6e21ee3.slice/crio-0f3830e47bcb43566b64917b2f2d3c488062533e57fbfa30dfa8c6b847b5dc00 WatchSource:0}: Error finding container 0f3830e47bcb43566b64917b2f2d3c488062533e57fbfa30dfa8c6b847b5dc00: Status 404 returned error can't find the container with id 0f3830e47bcb43566b64917b2f2d3c488062533e57fbfa30dfa8c6b847b5dc00 Feb 24 00:10:53 crc kubenswrapper[4824]: I0224 00:10:53.409241 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfhcg" event={"ID":"b00860ed-9085-40bb-9041-16eac6d88fb1","Type":"ContainerStarted","Data":"740ff9f30206ac3dc9b4d8ae780788f1992408f2e27b01ff6a0bd56b96684f80"} Feb 24 00:10:53 
crc kubenswrapper[4824]: I0224 00:10:53.412268 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmjz7" event={"ID":"cc119514-5c95-4925-8a1a-3e6844a34e1e","Type":"ContainerStarted","Data":"ace099e248827a099f2f9964ea688049a17101eb26b429197dc321a43c82c971"} Feb 24 00:10:53 crc kubenswrapper[4824]: I0224 00:10:53.415392 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hhftg" event={"ID":"3e306ddf-071d-47f2-b9b1-bf772963438e","Type":"ContainerStarted","Data":"83d0a00bbb287f8c717cb0e93e56c8a769b62fbe8a1114585fcf0819cddb1d85"} Feb 24 00:10:53 crc kubenswrapper[4824]: I0224 00:10:53.417853 4824 generic.go:334] "Generic (PLEG): container finished" podID="7a78c7d6-6ec6-4857-af87-25c5c8cf961d" containerID="ae3090316a207f659563cb6daa67a1cc4d3c280950cb420d2cf6d0ddebc465d5" exitCode=0 Feb 24 00:10:53 crc kubenswrapper[4824]: I0224 00:10:53.417920 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxplg" event={"ID":"7a78c7d6-6ec6-4857-af87-25c5c8cf961d","Type":"ContainerDied","Data":"ae3090316a207f659563cb6daa67a1cc4d3c280950cb420d2cf6d0ddebc465d5"} Feb 24 00:10:53 crc kubenswrapper[4824]: I0224 00:10:53.420615 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" event={"ID":"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3","Type":"ContainerStarted","Data":"0894c00150287c8fef9770d78ff81ad105a8a7bfb8ab9033aea7214f4baad5be"} Feb 24 00:10:53 crc kubenswrapper[4824]: I0224 00:10:53.420660 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" event={"ID":"070c1558-fa58-45f4-9e1e-e4a7d6e21ee3","Type":"ContainerStarted","Data":"0f3830e47bcb43566b64917b2f2d3c488062533e57fbfa30dfa8c6b847b5dc00"} Feb 24 00:10:53 crc kubenswrapper[4824]: I0224 00:10:53.421011 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:53 crc kubenswrapper[4824]: I0224 00:10:53.422429 4824 patch_prober.go:28] interesting pod/oauth-openshift-7484f6b95f-s5j25 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": dial tcp 10.217.0.64:6443: connect: connection refused" start-of-body= Feb 24 00:10:53 crc kubenswrapper[4824]: I0224 00:10:53.422460 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" podUID="070c1558-fa58-45f4-9e1e-e4a7d6e21ee3" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": dial tcp 10.217.0.64:6443: connect: connection refused" Feb 24 00:10:53 crc kubenswrapper[4824]: I0224 00:10:53.424594 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kgrd" event={"ID":"b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4","Type":"ContainerStarted","Data":"1d7b69ec3003cc3743308fff410ebd338aa2b0cc906c10d99169f9adff2cb524"} Feb 24 00:10:53 crc kubenswrapper[4824]: I0224 00:10:53.433722 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bfhcg" podStartSLOduration=3.925618017 podStartE2EDuration="1m16.433701666s" podCreationTimestamp="2026-02-24 00:09:37 +0000 UTC" firstStartedPulling="2026-02-24 00:09:40.253409546 +0000 UTC m=+244.243034015" lastFinishedPulling="2026-02-24 00:10:52.761493195 +0000 UTC m=+316.751117664" observedRunningTime="2026-02-24 00:10:53.433054178 +0000 UTC m=+317.422678647" watchObservedRunningTime="2026-02-24 00:10:53.433701666 +0000 UTC m=+317.423326135" Feb 24 00:10:53 crc kubenswrapper[4824]: I0224 00:10:53.454076 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6kgrd" podStartSLOduration=3.746534953 
podStartE2EDuration="1m16.454050951s" podCreationTimestamp="2026-02-24 00:09:37 +0000 UTC" firstStartedPulling="2026-02-24 00:09:40.117168759 +0000 UTC m=+244.106793228" lastFinishedPulling="2026-02-24 00:10:52.824684757 +0000 UTC m=+316.814309226" observedRunningTime="2026-02-24 00:10:53.45254931 +0000 UTC m=+317.442173779" watchObservedRunningTime="2026-02-24 00:10:53.454050951 +0000 UTC m=+317.443675420" Feb 24 00:10:53 crc kubenswrapper[4824]: I0224 00:10:53.500822 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hhftg" podStartSLOduration=3.939027639 podStartE2EDuration="1m16.500802575s" podCreationTimestamp="2026-02-24 00:09:37 +0000 UTC" firstStartedPulling="2026-02-24 00:09:40.117052686 +0000 UTC m=+244.106677155" lastFinishedPulling="2026-02-24 00:10:52.678827622 +0000 UTC m=+316.668452091" observedRunningTime="2026-02-24 00:10:53.499603902 +0000 UTC m=+317.489228381" watchObservedRunningTime="2026-02-24 00:10:53.500802575 +0000 UTC m=+317.490427044" Feb 24 00:10:53 crc kubenswrapper[4824]: I0224 00:10:53.525387 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dmjz7" podStartSLOduration=3.797124815 podStartE2EDuration="1m16.525370544s" podCreationTimestamp="2026-02-24 00:09:37 +0000 UTC" firstStartedPulling="2026-02-24 00:09:40.225108855 +0000 UTC m=+244.214733334" lastFinishedPulling="2026-02-24 00:10:52.953354594 +0000 UTC m=+316.942979063" observedRunningTime="2026-02-24 00:10:53.523486863 +0000 UTC m=+317.513111342" watchObservedRunningTime="2026-02-24 00:10:53.525370544 +0000 UTC m=+317.514995013" Feb 24 00:10:53 crc kubenswrapper[4824]: I0224 00:10:53.549168 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" podStartSLOduration=27.549142232 podStartE2EDuration="27.549142232s" podCreationTimestamp="2026-02-24 00:10:26 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:53.547095266 +0000 UTC m=+317.536719755" watchObservedRunningTime="2026-02-24 00:10:53.549142232 +0000 UTC m=+317.538766701" Feb 24 00:10:54 crc kubenswrapper[4824]: I0224 00:10:54.433924 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxplg" event={"ID":"7a78c7d6-6ec6-4857-af87-25c5c8cf961d","Type":"ContainerStarted","Data":"da0889e558f40879c1fabc025b056771135a794508cda04386b8dc975f871518"} Feb 24 00:10:54 crc kubenswrapper[4824]: I0224 00:10:54.447940 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7484f6b95f-s5j25" Feb 24 00:10:54 crc kubenswrapper[4824]: I0224 00:10:54.466786 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zxplg" podStartSLOduration=3.015280006 podStartE2EDuration="1m14.466765092s" podCreationTimestamp="2026-02-24 00:09:40 +0000 UTC" firstStartedPulling="2026-02-24 00:09:42.393581022 +0000 UTC m=+246.383205491" lastFinishedPulling="2026-02-24 00:10:53.845066108 +0000 UTC m=+317.834690577" observedRunningTime="2026-02-24 00:10:54.463177844 +0000 UTC m=+318.452802313" watchObservedRunningTime="2026-02-24 00:10:54.466765092 +0000 UTC m=+318.456389561" Feb 24 00:10:57 crc kubenswrapper[4824]: I0224 00:10:57.250270 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b978c5766-k8r5n"] Feb 24 00:10:57 crc kubenswrapper[4824]: I0224 00:10:57.250577 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" podUID="48b7664a-e47f-4e07-b650-093a751f389f" containerName="controller-manager" containerID="cri-o://0f4168ffce744376cf910cb3ea91a3c95473e9ff3db2ca447e36bdb7404167f5" 
gracePeriod=30 Feb 24 00:10:57 crc kubenswrapper[4824]: I0224 00:10:57.258902 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8"] Feb 24 00:10:57 crc kubenswrapper[4824]: I0224 00:10:57.259122 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" podUID="1c800062-d998-4df3-97e1-ca5df1a57de9" containerName="route-controller-manager" containerID="cri-o://b96e0735ee158d70b84382ad4a8a1094ebe1136cbc03596e96e9bd01b1c192ba" gracePeriod=30 Feb 24 00:10:57 crc kubenswrapper[4824]: I0224 00:10:57.920655 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dmjz7" Feb 24 00:10:57 crc kubenswrapper[4824]: I0224 00:10:57.920720 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dmjz7" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.150120 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hhftg" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.150207 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hhftg" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.272547 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bfhcg" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.272601 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bfhcg" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.372756 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6kgrd" Feb 24 00:10:58 crc kubenswrapper[4824]: 
I0224 00:10:58.372836 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6kgrd" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.438252 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6kgrd" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.438355 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bfhcg" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.439116 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dmjz7" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.439496 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hhftg" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.459938 4824 generic.go:334] "Generic (PLEG): container finished" podID="1c800062-d998-4df3-97e1-ca5df1a57de9" containerID="b96e0735ee158d70b84382ad4a8a1094ebe1136cbc03596e96e9bd01b1c192ba" exitCode=0 Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.460179 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" event={"ID":"1c800062-d998-4df3-97e1-ca5df1a57de9","Type":"ContainerDied","Data":"b96e0735ee158d70b84382ad4a8a1094ebe1136cbc03596e96e9bd01b1c192ba"} Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.462565 4824 generic.go:334] "Generic (PLEG): container finished" podID="48b7664a-e47f-4e07-b650-093a751f389f" containerID="0f4168ffce744376cf910cb3ea91a3c95473e9ff3db2ca447e36bdb7404167f5" exitCode=0 Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.463420 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" 
event={"ID":"48b7664a-e47f-4e07-b650-093a751f389f","Type":"ContainerDied","Data":"0f4168ffce744376cf910cb3ea91a3c95473e9ff3db2ca447e36bdb7404167f5"} Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.523689 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6kgrd" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.532694 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bfhcg" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.543713 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hhftg" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.544086 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dmjz7" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.561269 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.641318 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx"] Feb 24 00:10:58 crc kubenswrapper[4824]: E0224 00:10:58.642306 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c800062-d998-4df3-97e1-ca5df1a57de9" containerName="route-controller-manager" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.642393 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c800062-d998-4df3-97e1-ca5df1a57de9" containerName="route-controller-manager" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.642596 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c800062-d998-4df3-97e1-ca5df1a57de9" containerName="route-controller-manager" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.643080 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.667957 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx"] Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.709276 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c800062-d998-4df3-97e1-ca5df1a57de9-serving-cert\") pod \"1c800062-d998-4df3-97e1-ca5df1a57de9\" (UID: \"1c800062-d998-4df3-97e1-ca5df1a57de9\") " Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.709344 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c800062-d998-4df3-97e1-ca5df1a57de9-config\") pod \"1c800062-d998-4df3-97e1-ca5df1a57de9\" (UID: \"1c800062-d998-4df3-97e1-ca5df1a57de9\") " Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.709420 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45tz4\" (UniqueName: \"kubernetes.io/projected/1c800062-d998-4df3-97e1-ca5df1a57de9-kube-api-access-45tz4\") pod \"1c800062-d998-4df3-97e1-ca5df1a57de9\" (UID: \"1c800062-d998-4df3-97e1-ca5df1a57de9\") " Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.709476 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c800062-d998-4df3-97e1-ca5df1a57de9-client-ca\") pod \"1c800062-d998-4df3-97e1-ca5df1a57de9\" (UID: \"1c800062-d998-4df3-97e1-ca5df1a57de9\") " Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.710507 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c800062-d998-4df3-97e1-ca5df1a57de9-client-ca" (OuterVolumeSpecName: "client-ca") pod "1c800062-d998-4df3-97e1-ca5df1a57de9" 
(UID: "1c800062-d998-4df3-97e1-ca5df1a57de9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.710588 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c800062-d998-4df3-97e1-ca5df1a57de9-config" (OuterVolumeSpecName: "config") pod "1c800062-d998-4df3-97e1-ca5df1a57de9" (UID: "1c800062-d998-4df3-97e1-ca5df1a57de9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.718639 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c800062-d998-4df3-97e1-ca5df1a57de9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1c800062-d998-4df3-97e1-ca5df1a57de9" (UID: "1c800062-d998-4df3-97e1-ca5df1a57de9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.720727 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c800062-d998-4df3-97e1-ca5df1a57de9-kube-api-access-45tz4" (OuterVolumeSpecName: "kube-api-access-45tz4") pod "1c800062-d998-4df3-97e1-ca5df1a57de9" (UID: "1c800062-d998-4df3-97e1-ca5df1a57de9"). InnerVolumeSpecName "kube-api-access-45tz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.810588 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf4mw\" (UniqueName: \"kubernetes.io/projected/4fb39704-7b42-4cdb-8b97-a410aee2e71d-kube-api-access-tf4mw\") pod \"route-controller-manager-7646d7595b-xmkcx\" (UID: \"4fb39704-7b42-4cdb-8b97-a410aee2e71d\") " pod="openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.810774 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fb39704-7b42-4cdb-8b97-a410aee2e71d-config\") pod \"route-controller-manager-7646d7595b-xmkcx\" (UID: \"4fb39704-7b42-4cdb-8b97-a410aee2e71d\") " pod="openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.810833 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fb39704-7b42-4cdb-8b97-a410aee2e71d-client-ca\") pod \"route-controller-manager-7646d7595b-xmkcx\" (UID: \"4fb39704-7b42-4cdb-8b97-a410aee2e71d\") " pod="openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.810901 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fb39704-7b42-4cdb-8b97-a410aee2e71d-serving-cert\") pod \"route-controller-manager-7646d7595b-xmkcx\" (UID: \"4fb39704-7b42-4cdb-8b97-a410aee2e71d\") " pod="openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.811134 4824 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-45tz4\" (UniqueName: \"kubernetes.io/projected/1c800062-d998-4df3-97e1-ca5df1a57de9-kube-api-access-45tz4\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.811191 4824 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c800062-d998-4df3-97e1-ca5df1a57de9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.811202 4824 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c800062-d998-4df3-97e1-ca5df1a57de9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.811213 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c800062-d998-4df3-97e1-ca5df1a57de9-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.912284 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fb39704-7b42-4cdb-8b97-a410aee2e71d-config\") pod \"route-controller-manager-7646d7595b-xmkcx\" (UID: \"4fb39704-7b42-4cdb-8b97-a410aee2e71d\") " pod="openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.912434 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fb39704-7b42-4cdb-8b97-a410aee2e71d-client-ca\") pod \"route-controller-manager-7646d7595b-xmkcx\" (UID: \"4fb39704-7b42-4cdb-8b97-a410aee2e71d\") " pod="openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.913606 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/4fb39704-7b42-4cdb-8b97-a410aee2e71d-client-ca\") pod \"route-controller-manager-7646d7595b-xmkcx\" (UID: \"4fb39704-7b42-4cdb-8b97-a410aee2e71d\") " pod="openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.913683 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fb39704-7b42-4cdb-8b97-a410aee2e71d-config\") pod \"route-controller-manager-7646d7595b-xmkcx\" (UID: \"4fb39704-7b42-4cdb-8b97-a410aee2e71d\") " pod="openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.913707 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fb39704-7b42-4cdb-8b97-a410aee2e71d-serving-cert\") pod \"route-controller-manager-7646d7595b-xmkcx\" (UID: \"4fb39704-7b42-4cdb-8b97-a410aee2e71d\") " pod="openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.914324 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf4mw\" (UniqueName: \"kubernetes.io/projected/4fb39704-7b42-4cdb-8b97-a410aee2e71d-kube-api-access-tf4mw\") pod \"route-controller-manager-7646d7595b-xmkcx\" (UID: \"4fb39704-7b42-4cdb-8b97-a410aee2e71d\") " pod="openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.917488 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fb39704-7b42-4cdb-8b97-a410aee2e71d-serving-cert\") pod \"route-controller-manager-7646d7595b-xmkcx\" (UID: \"4fb39704-7b42-4cdb-8b97-a410aee2e71d\") " pod="openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx" Feb 24 
00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.936536 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf4mw\" (UniqueName: \"kubernetes.io/projected/4fb39704-7b42-4cdb-8b97-a410aee2e71d-kube-api-access-tf4mw\") pod \"route-controller-manager-7646d7595b-xmkcx\" (UID: \"4fb39704-7b42-4cdb-8b97-a410aee2e71d\") " pod="openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx" Feb 24 00:10:58 crc kubenswrapper[4824]: I0224 00:10:58.970700 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx" Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.304654 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.421981 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48b7664a-e47f-4e07-b650-093a751f389f-config\") pod \"48b7664a-e47f-4e07-b650-093a751f389f\" (UID: \"48b7664a-e47f-4e07-b650-093a751f389f\") " Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.422052 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfx2m\" (UniqueName: \"kubernetes.io/projected/48b7664a-e47f-4e07-b650-093a751f389f-kube-api-access-vfx2m\") pod \"48b7664a-e47f-4e07-b650-093a751f389f\" (UID: \"48b7664a-e47f-4e07-b650-093a751f389f\") " Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.422077 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48b7664a-e47f-4e07-b650-093a751f389f-client-ca\") pod \"48b7664a-e47f-4e07-b650-093a751f389f\" (UID: \"48b7664a-e47f-4e07-b650-093a751f389f\") " Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.422154 4824 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48b7664a-e47f-4e07-b650-093a751f389f-serving-cert\") pod \"48b7664a-e47f-4e07-b650-093a751f389f\" (UID: \"48b7664a-e47f-4e07-b650-093a751f389f\") " Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.422199 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48b7664a-e47f-4e07-b650-093a751f389f-proxy-ca-bundles\") pod \"48b7664a-e47f-4e07-b650-093a751f389f\" (UID: \"48b7664a-e47f-4e07-b650-093a751f389f\") " Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.423369 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48b7664a-e47f-4e07-b650-093a751f389f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "48b7664a-e47f-4e07-b650-093a751f389f" (UID: "48b7664a-e47f-4e07-b650-093a751f389f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.423578 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48b7664a-e47f-4e07-b650-093a751f389f-client-ca" (OuterVolumeSpecName: "client-ca") pod "48b7664a-e47f-4e07-b650-093a751f389f" (UID: "48b7664a-e47f-4e07-b650-093a751f389f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.424577 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48b7664a-e47f-4e07-b650-093a751f389f-config" (OuterVolumeSpecName: "config") pod "48b7664a-e47f-4e07-b650-093a751f389f" (UID: "48b7664a-e47f-4e07-b650-093a751f389f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.426148 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48b7664a-e47f-4e07-b650-093a751f389f-kube-api-access-vfx2m" (OuterVolumeSpecName: "kube-api-access-vfx2m") pod "48b7664a-e47f-4e07-b650-093a751f389f" (UID: "48b7664a-e47f-4e07-b650-093a751f389f"). InnerVolumeSpecName "kube-api-access-vfx2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.426869 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48b7664a-e47f-4e07-b650-093a751f389f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "48b7664a-e47f-4e07-b650-093a751f389f" (UID: "48b7664a-e47f-4e07-b650-093a751f389f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.463954 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx"] Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.471296 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.472197 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b978c5766-k8r5n" event={"ID":"48b7664a-e47f-4e07-b650-093a751f389f","Type":"ContainerDied","Data":"104c6e9bfb9c8cc307927700a237315280842a4940df005e0dfceb2b9121c433"} Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.472258 4824 scope.go:117] "RemoveContainer" containerID="0f4168ffce744376cf910cb3ea91a3c95473e9ff3db2ca447e36bdb7404167f5" Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.475392 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" event={"ID":"1c800062-d998-4df3-97e1-ca5df1a57de9","Type":"ContainerDied","Data":"0a7a42e872a1ae8acc7406739cad82b1452bea819a0d086289fe5c4eb6a595fd"} Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.475725 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8" Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.494134 4824 scope.go:117] "RemoveContainer" containerID="b96e0735ee158d70b84382ad4a8a1094ebe1136cbc03596e96e9bd01b1c192ba" Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.511215 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b978c5766-k8r5n"] Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.515206 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6b978c5766-k8r5n"] Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.523997 4824 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48b7664a-e47f-4e07-b650-093a751f389f-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.524036 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfx2m\" (UniqueName: \"kubernetes.io/projected/48b7664a-e47f-4e07-b650-093a751f389f-kube-api-access-vfx2m\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.524046 4824 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48b7664a-e47f-4e07-b650-093a751f389f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.524056 4824 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48b7664a-e47f-4e07-b650-093a751f389f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.524064 4824 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48b7664a-e47f-4e07-b650-093a751f389f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 
00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.527113 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8"] Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.530766 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f4db755ff-dfrn8"] Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.664310 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nzqwf" Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.664387 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nzqwf" Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.709216 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nzqwf" Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.942564 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mfzkw" Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.942945 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mfzkw" Feb 24 00:10:59 crc kubenswrapper[4824]: I0224 00:10:59.984587 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mfzkw" Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.483364 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx" event={"ID":"4fb39704-7b42-4cdb-8b97-a410aee2e71d","Type":"ContainerStarted","Data":"a2952b5305c41a1d3a3a66df07e041f46c5d7f0968959a3cd17d893e4f11da6f"} Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.483415 4824 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx" event={"ID":"4fb39704-7b42-4cdb-8b97-a410aee2e71d","Type":"ContainerStarted","Data":"882c1288239b83f0062521860a302214b25cf57db5ec2c3d59d4f89576f3241c"} Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.483543 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx" Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.490059 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx" Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.500484 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7646d7595b-xmkcx" podStartSLOduration=3.500462959 podStartE2EDuration="3.500462959s" podCreationTimestamp="2026-02-24 00:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:11:00.499916554 +0000 UTC m=+324.489541033" watchObservedRunningTime="2026-02-24 00:11:00.500462959 +0000 UTC m=+324.490087428" Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.523935 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6kgrd"] Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.524182 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6kgrd" podUID="b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4" containerName="registry-server" containerID="cri-o://1d7b69ec3003cc3743308fff410ebd338aa2b0cc906c10d99169f9adff2cb524" gracePeriod=2 Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.533849 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-nzqwf" Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.533981 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mfzkw" Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.705021 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c800062-d998-4df3-97e1-ca5df1a57de9" path="/var/lib/kubelet/pods/1c800062-d998-4df3-97e1-ca5df1a57de9/volumes" Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.705600 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48b7664a-e47f-4e07-b650-093a751f389f" path="/var/lib/kubelet/pods/48b7664a-e47f-4e07-b650-093a751f389f/volumes" Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.722274 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bfhcg"] Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.722574 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bfhcg" podUID="b00860ed-9085-40bb-9041-16eac6d88fb1" containerName="registry-server" containerID="cri-o://740ff9f30206ac3dc9b4d8ae780788f1992408f2e27b01ff6a0bd56b96684f80" gracePeriod=2 Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.802846 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zxplg" Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.802900 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zxplg" Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.844020 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zxplg" Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.930923 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6kgrd" Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.995054 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-748b548fc5-vnmjp"] Feb 24 00:11:00 crc kubenswrapper[4824]: E0224 00:11:00.995339 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b7664a-e47f-4e07-b650-093a751f389f" containerName="controller-manager" Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.995359 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b7664a-e47f-4e07-b650-093a751f389f" containerName="controller-manager" Feb 24 00:11:00 crc kubenswrapper[4824]: E0224 00:11:00.995378 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4" containerName="registry-server" Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.995387 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4" containerName="registry-server" Feb 24 00:11:00 crc kubenswrapper[4824]: E0224 00:11:00.995404 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4" containerName="extract-content" Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.995413 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4" containerName="extract-content" Feb 24 00:11:00 crc kubenswrapper[4824]: E0224 00:11:00.995429 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4" containerName="extract-utilities" Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.995437 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4" containerName="extract-utilities" Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.995983 4824 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="48b7664a-e47f-4e07-b650-093a751f389f" containerName="controller-manager" Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.996010 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4" containerName="registry-server" Feb 24 00:11:00 crc kubenswrapper[4824]: I0224 00:11:00.996578 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:00.998595 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:00.998800 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.001805 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.002763 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.007331 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-748b548fc5-vnmjp"] Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.007670 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.020321 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.025478 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 00:11:01 crc 
kubenswrapper[4824]: I0224 00:11:01.045190 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4-catalog-content\") pod \"b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4\" (UID: \"b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4\") " Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.045239 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c94kl\" (UniqueName: \"kubernetes.io/projected/b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4-kube-api-access-c94kl\") pod \"b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4\" (UID: \"b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4\") " Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.045279 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4-utilities\") pod \"b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4\" (UID: \"b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4\") " Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.046189 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4-utilities" (OuterVolumeSpecName: "utilities") pod "b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4" (UID: "b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.055888 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4-kube-api-access-c94kl" (OuterVolumeSpecName: "kube-api-access-c94kl") pod "b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4" (UID: "b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4"). InnerVolumeSpecName "kube-api-access-c94kl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.101380 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4" (UID: "b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.132084 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bfhcg" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.146929 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4767901-4638-4ac8-9c1e-7e61341ddc21-proxy-ca-bundles\") pod \"controller-manager-748b548fc5-vnmjp\" (UID: \"d4767901-4638-4ac8-9c1e-7e61341ddc21\") " pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.146993 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb8jx\" (UniqueName: \"kubernetes.io/projected/d4767901-4638-4ac8-9c1e-7e61341ddc21-kube-api-access-mb8jx\") pod \"controller-manager-748b548fc5-vnmjp\" (UID: \"d4767901-4638-4ac8-9c1e-7e61341ddc21\") " pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.147189 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4767901-4638-4ac8-9c1e-7e61341ddc21-config\") pod \"controller-manager-748b548fc5-vnmjp\" (UID: \"d4767901-4638-4ac8-9c1e-7e61341ddc21\") " 
pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.147311 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4767901-4638-4ac8-9c1e-7e61341ddc21-client-ca\") pod \"controller-manager-748b548fc5-vnmjp\" (UID: \"d4767901-4638-4ac8-9c1e-7e61341ddc21\") " pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.147400 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4767901-4638-4ac8-9c1e-7e61341ddc21-serving-cert\") pod \"controller-manager-748b548fc5-vnmjp\" (UID: \"d4767901-4638-4ac8-9c1e-7e61341ddc21\") " pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.147560 4824 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.148059 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c94kl\" (UniqueName: \"kubernetes.io/projected/b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4-kube-api-access-c94kl\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.148076 4824 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.249731 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t895b\" (UniqueName: 
\"kubernetes.io/projected/b00860ed-9085-40bb-9041-16eac6d88fb1-kube-api-access-t895b\") pod \"b00860ed-9085-40bb-9041-16eac6d88fb1\" (UID: \"b00860ed-9085-40bb-9041-16eac6d88fb1\") " Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.250409 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b00860ed-9085-40bb-9041-16eac6d88fb1-catalog-content\") pod \"b00860ed-9085-40bb-9041-16eac6d88fb1\" (UID: \"b00860ed-9085-40bb-9041-16eac6d88fb1\") " Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.250803 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b00860ed-9085-40bb-9041-16eac6d88fb1-utilities\") pod \"b00860ed-9085-40bb-9041-16eac6d88fb1\" (UID: \"b00860ed-9085-40bb-9041-16eac6d88fb1\") " Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.251303 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4767901-4638-4ac8-9c1e-7e61341ddc21-proxy-ca-bundles\") pod \"controller-manager-748b548fc5-vnmjp\" (UID: \"d4767901-4638-4ac8-9c1e-7e61341ddc21\") " pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.251424 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb8jx\" (UniqueName: \"kubernetes.io/projected/d4767901-4638-4ac8-9c1e-7e61341ddc21-kube-api-access-mb8jx\") pod \"controller-manager-748b548fc5-vnmjp\" (UID: \"d4767901-4638-4ac8-9c1e-7e61341ddc21\") " pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.251570 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4767901-4638-4ac8-9c1e-7e61341ddc21-config\") pod 
\"controller-manager-748b548fc5-vnmjp\" (UID: \"d4767901-4638-4ac8-9c1e-7e61341ddc21\") " pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.251726 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4767901-4638-4ac8-9c1e-7e61341ddc21-client-ca\") pod \"controller-manager-748b548fc5-vnmjp\" (UID: \"d4767901-4638-4ac8-9c1e-7e61341ddc21\") " pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.251844 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4767901-4638-4ac8-9c1e-7e61341ddc21-serving-cert\") pod \"controller-manager-748b548fc5-vnmjp\" (UID: \"d4767901-4638-4ac8-9c1e-7e61341ddc21\") " pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.251862 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b00860ed-9085-40bb-9041-16eac6d88fb1-utilities" (OuterVolumeSpecName: "utilities") pod "b00860ed-9085-40bb-9041-16eac6d88fb1" (UID: "b00860ed-9085-40bb-9041-16eac6d88fb1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.253014 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4767901-4638-4ac8-9c1e-7e61341ddc21-client-ca\") pod \"controller-manager-748b548fc5-vnmjp\" (UID: \"d4767901-4638-4ac8-9c1e-7e61341ddc21\") " pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.253102 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4767901-4638-4ac8-9c1e-7e61341ddc21-proxy-ca-bundles\") pod \"controller-manager-748b548fc5-vnmjp\" (UID: \"d4767901-4638-4ac8-9c1e-7e61341ddc21\") " pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.253734 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4767901-4638-4ac8-9c1e-7e61341ddc21-config\") pod \"controller-manager-748b548fc5-vnmjp\" (UID: \"d4767901-4638-4ac8-9c1e-7e61341ddc21\") " pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.257830 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b00860ed-9085-40bb-9041-16eac6d88fb1-kube-api-access-t895b" (OuterVolumeSpecName: "kube-api-access-t895b") pod "b00860ed-9085-40bb-9041-16eac6d88fb1" (UID: "b00860ed-9085-40bb-9041-16eac6d88fb1"). InnerVolumeSpecName "kube-api-access-t895b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.258628 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4767901-4638-4ac8-9c1e-7e61341ddc21-serving-cert\") pod \"controller-manager-748b548fc5-vnmjp\" (UID: \"d4767901-4638-4ac8-9c1e-7e61341ddc21\") " pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.270703 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb8jx\" (UniqueName: \"kubernetes.io/projected/d4767901-4638-4ac8-9c1e-7e61341ddc21-kube-api-access-mb8jx\") pod \"controller-manager-748b548fc5-vnmjp\" (UID: \"d4767901-4638-4ac8-9c1e-7e61341ddc21\") " pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.307463 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b00860ed-9085-40bb-9041-16eac6d88fb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b00860ed-9085-40bb-9041-16eac6d88fb1" (UID: "b00860ed-9085-40bb-9041-16eac6d88fb1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.345687 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.352894 4824 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b00860ed-9085-40bb-9041-16eac6d88fb1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.352952 4824 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b00860ed-9085-40bb-9041-16eac6d88fb1-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.352962 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t895b\" (UniqueName: \"kubernetes.io/projected/b00860ed-9085-40bb-9041-16eac6d88fb1-kube-api-access-t895b\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.415372 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gl27t" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.415436 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gl27t" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.504584 4824 generic.go:334] "Generic (PLEG): container finished" podID="b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4" containerID="1d7b69ec3003cc3743308fff410ebd338aa2b0cc906c10d99169f9adff2cb524" exitCode=0 Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.504678 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6kgrd" event={"ID":"b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4","Type":"ContainerDied","Data":"1d7b69ec3003cc3743308fff410ebd338aa2b0cc906c10d99169f9adff2cb524"} Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.505040 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-6kgrd" event={"ID":"b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4","Type":"ContainerDied","Data":"a33e62fe6f2549eb1208d3cf356835348bf6325f74507046a34d8f566aaa9f3c"} Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.505067 4824 scope.go:117] "RemoveContainer" containerID="1d7b69ec3003cc3743308fff410ebd338aa2b0cc906c10d99169f9adff2cb524" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.504710 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6kgrd" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.511373 4824 generic.go:334] "Generic (PLEG): container finished" podID="b00860ed-9085-40bb-9041-16eac6d88fb1" containerID="740ff9f30206ac3dc9b4d8ae780788f1992408f2e27b01ff6a0bd56b96684f80" exitCode=0 Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.511485 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bfhcg" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.511608 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfhcg" event={"ID":"b00860ed-9085-40bb-9041-16eac6d88fb1","Type":"ContainerDied","Data":"740ff9f30206ac3dc9b4d8ae780788f1992408f2e27b01ff6a0bd56b96684f80"} Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.511651 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bfhcg" event={"ID":"b00860ed-9085-40bb-9041-16eac6d88fb1","Type":"ContainerDied","Data":"9c018af5a403cd93073513dceb16d5cd69816c1726faff3f2cab188f2753d464"} Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.519675 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gl27t" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.538647 4824 scope.go:117] "RemoveContainer" 
containerID="25d34726cfc81bf777c29be29729340267c9cddf9a761fe641537d6b9d860999" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.579479 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bfhcg"] Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.582492 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gl27t" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.582620 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zxplg" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.592787 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bfhcg"] Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.599783 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6kgrd"] Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.601652 4824 scope.go:117] "RemoveContainer" containerID="8bc4c4e74760164a686d72949257ade00d0c3cfa06f89e62b82604ab57ead475" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.603917 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6kgrd"] Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.629388 4824 scope.go:117] "RemoveContainer" containerID="1d7b69ec3003cc3743308fff410ebd338aa2b0cc906c10d99169f9adff2cb524" Feb 24 00:11:01 crc kubenswrapper[4824]: E0224 00:11:01.630153 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d7b69ec3003cc3743308fff410ebd338aa2b0cc906c10d99169f9adff2cb524\": container with ID starting with 1d7b69ec3003cc3743308fff410ebd338aa2b0cc906c10d99169f9adff2cb524 not found: ID does not exist" containerID="1d7b69ec3003cc3743308fff410ebd338aa2b0cc906c10d99169f9adff2cb524" Feb 24 00:11:01 crc 
kubenswrapper[4824]: I0224 00:11:01.630187 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d7b69ec3003cc3743308fff410ebd338aa2b0cc906c10d99169f9adff2cb524"} err="failed to get container status \"1d7b69ec3003cc3743308fff410ebd338aa2b0cc906c10d99169f9adff2cb524\": rpc error: code = NotFound desc = could not find container \"1d7b69ec3003cc3743308fff410ebd338aa2b0cc906c10d99169f9adff2cb524\": container with ID starting with 1d7b69ec3003cc3743308fff410ebd338aa2b0cc906c10d99169f9adff2cb524 not found: ID does not exist" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.630212 4824 scope.go:117] "RemoveContainer" containerID="25d34726cfc81bf777c29be29729340267c9cddf9a761fe641537d6b9d860999" Feb 24 00:11:01 crc kubenswrapper[4824]: E0224 00:11:01.630612 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25d34726cfc81bf777c29be29729340267c9cddf9a761fe641537d6b9d860999\": container with ID starting with 25d34726cfc81bf777c29be29729340267c9cddf9a761fe641537d6b9d860999 not found: ID does not exist" containerID="25d34726cfc81bf777c29be29729340267c9cddf9a761fe641537d6b9d860999" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.630665 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25d34726cfc81bf777c29be29729340267c9cddf9a761fe641537d6b9d860999"} err="failed to get container status \"25d34726cfc81bf777c29be29729340267c9cddf9a761fe641537d6b9d860999\": rpc error: code = NotFound desc = could not find container \"25d34726cfc81bf777c29be29729340267c9cddf9a761fe641537d6b9d860999\": container with ID starting with 25d34726cfc81bf777c29be29729340267c9cddf9a761fe641537d6b9d860999 not found: ID does not exist" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.630699 4824 scope.go:117] "RemoveContainer" containerID="8bc4c4e74760164a686d72949257ade00d0c3cfa06f89e62b82604ab57ead475" Feb 24 
00:11:01 crc kubenswrapper[4824]: E0224 00:11:01.631113 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bc4c4e74760164a686d72949257ade00d0c3cfa06f89e62b82604ab57ead475\": container with ID starting with 8bc4c4e74760164a686d72949257ade00d0c3cfa06f89e62b82604ab57ead475 not found: ID does not exist" containerID="8bc4c4e74760164a686d72949257ade00d0c3cfa06f89e62b82604ab57ead475" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.631141 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bc4c4e74760164a686d72949257ade00d0c3cfa06f89e62b82604ab57ead475"} err="failed to get container status \"8bc4c4e74760164a686d72949257ade00d0c3cfa06f89e62b82604ab57ead475\": rpc error: code = NotFound desc = could not find container \"8bc4c4e74760164a686d72949257ade00d0c3cfa06f89e62b82604ab57ead475\": container with ID starting with 8bc4c4e74760164a686d72949257ade00d0c3cfa06f89e62b82604ab57ead475 not found: ID does not exist" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.631157 4824 scope.go:117] "RemoveContainer" containerID="740ff9f30206ac3dc9b4d8ae780788f1992408f2e27b01ff6a0bd56b96684f80" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.652339 4824 scope.go:117] "RemoveContainer" containerID="3ea11f300330e655d35201dc1283dc47672fa1f01b0818a96fdef96fb4638961" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.692314 4824 scope.go:117] "RemoveContainer" containerID="b1c2dccdb4d30a9f6f73707633f7236dc1482d098520afb3ff0820f1bf1efcba" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.708677 4824 scope.go:117] "RemoveContainer" containerID="740ff9f30206ac3dc9b4d8ae780788f1992408f2e27b01ff6a0bd56b96684f80" Feb 24 00:11:01 crc kubenswrapper[4824]: E0224 00:11:01.709317 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"740ff9f30206ac3dc9b4d8ae780788f1992408f2e27b01ff6a0bd56b96684f80\": container with ID starting with 740ff9f30206ac3dc9b4d8ae780788f1992408f2e27b01ff6a0bd56b96684f80 not found: ID does not exist" containerID="740ff9f30206ac3dc9b4d8ae780788f1992408f2e27b01ff6a0bd56b96684f80" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.709361 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740ff9f30206ac3dc9b4d8ae780788f1992408f2e27b01ff6a0bd56b96684f80"} err="failed to get container status \"740ff9f30206ac3dc9b4d8ae780788f1992408f2e27b01ff6a0bd56b96684f80\": rpc error: code = NotFound desc = could not find container \"740ff9f30206ac3dc9b4d8ae780788f1992408f2e27b01ff6a0bd56b96684f80\": container with ID starting with 740ff9f30206ac3dc9b4d8ae780788f1992408f2e27b01ff6a0bd56b96684f80 not found: ID does not exist" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.709393 4824 scope.go:117] "RemoveContainer" containerID="3ea11f300330e655d35201dc1283dc47672fa1f01b0818a96fdef96fb4638961" Feb 24 00:11:01 crc kubenswrapper[4824]: E0224 00:11:01.709705 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ea11f300330e655d35201dc1283dc47672fa1f01b0818a96fdef96fb4638961\": container with ID starting with 3ea11f300330e655d35201dc1283dc47672fa1f01b0818a96fdef96fb4638961 not found: ID does not exist" containerID="3ea11f300330e655d35201dc1283dc47672fa1f01b0818a96fdef96fb4638961" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.709729 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea11f300330e655d35201dc1283dc47672fa1f01b0818a96fdef96fb4638961"} err="failed to get container status \"3ea11f300330e655d35201dc1283dc47672fa1f01b0818a96fdef96fb4638961\": rpc error: code = NotFound desc = could not find container \"3ea11f300330e655d35201dc1283dc47672fa1f01b0818a96fdef96fb4638961\": container with ID 
starting with 3ea11f300330e655d35201dc1283dc47672fa1f01b0818a96fdef96fb4638961 not found: ID does not exist" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.709750 4824 scope.go:117] "RemoveContainer" containerID="b1c2dccdb4d30a9f6f73707633f7236dc1482d098520afb3ff0820f1bf1efcba" Feb 24 00:11:01 crc kubenswrapper[4824]: E0224 00:11:01.710270 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1c2dccdb4d30a9f6f73707633f7236dc1482d098520afb3ff0820f1bf1efcba\": container with ID starting with b1c2dccdb4d30a9f6f73707633f7236dc1482d098520afb3ff0820f1bf1efcba not found: ID does not exist" containerID="b1c2dccdb4d30a9f6f73707633f7236dc1482d098520afb3ff0820f1bf1efcba" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.710302 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1c2dccdb4d30a9f6f73707633f7236dc1482d098520afb3ff0820f1bf1efcba"} err="failed to get container status \"b1c2dccdb4d30a9f6f73707633f7236dc1482d098520afb3ff0820f1bf1efcba\": rpc error: code = NotFound desc = could not find container \"b1c2dccdb4d30a9f6f73707633f7236dc1482d098520afb3ff0820f1bf1efcba\": container with ID starting with b1c2dccdb4d30a9f6f73707633f7236dc1482d098520afb3ff0820f1bf1efcba not found: ID does not exist" Feb 24 00:11:01 crc kubenswrapper[4824]: I0224 00:11:01.831037 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-748b548fc5-vnmjp"] Feb 24 00:11:01 crc kubenswrapper[4824]: W0224 00:11:01.839109 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4767901_4638_4ac8_9c1e_7e61341ddc21.slice/crio-6faf23bbaf504552323f49afcdd6447816bf7709fe5e77f695a1faa41bfc90d0 WatchSource:0}: Error finding container 6faf23bbaf504552323f49afcdd6447816bf7709fe5e77f695a1faa41bfc90d0: Status 404 returned error can't find the 
container with id 6faf23bbaf504552323f49afcdd6447816bf7709fe5e77f695a1faa41bfc90d0 Feb 24 00:11:02 crc kubenswrapper[4824]: I0224 00:11:02.518203 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" event={"ID":"d4767901-4638-4ac8-9c1e-7e61341ddc21","Type":"ContainerStarted","Data":"d5368730b964ed48604f2db0395f014d1318cc9a4881f28e1ada0f47b8ce9393"} Feb 24 00:11:02 crc kubenswrapper[4824]: I0224 00:11:02.518252 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" event={"ID":"d4767901-4638-4ac8-9c1e-7e61341ddc21","Type":"ContainerStarted","Data":"6faf23bbaf504552323f49afcdd6447816bf7709fe5e77f695a1faa41bfc90d0"} Feb 24 00:11:02 crc kubenswrapper[4824]: I0224 00:11:02.518474 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" Feb 24 00:11:02 crc kubenswrapper[4824]: I0224 00:11:02.524615 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" Feb 24 00:11:02 crc kubenswrapper[4824]: I0224 00:11:02.536073 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-748b548fc5-vnmjp" podStartSLOduration=5.536049602 podStartE2EDuration="5.536049602s" podCreationTimestamp="2026-02-24 00:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:11:02.535082935 +0000 UTC m=+326.524707404" watchObservedRunningTime="2026-02-24 00:11:02.536049602 +0000 UTC m=+326.525674081" Feb 24 00:11:02 crc kubenswrapper[4824]: I0224 00:11:02.706034 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b00860ed-9085-40bb-9041-16eac6d88fb1" 
path="/var/lib/kubelet/pods/b00860ed-9085-40bb-9041-16eac6d88fb1/volumes" Feb 24 00:11:02 crc kubenswrapper[4824]: I0224 00:11:02.707663 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4" path="/var/lib/kubelet/pods/b8af18e1-1420-44fb-b1dc-0d0fb8f6aaf4/volumes" Feb 24 00:11:02 crc kubenswrapper[4824]: I0224 00:11:02.920975 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfzkw"] Feb 24 00:11:02 crc kubenswrapper[4824]: I0224 00:11:02.921245 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mfzkw" podUID="08de7fe0-2d54-408b-8e09-3e1b9bcf931a" containerName="registry-server" containerID="cri-o://508ed9b0876f895bd93f2220b145fd5e2693472af97b22cae0f711fdc293c35c" gracePeriod=2 Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.314335 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mfzkw" Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.386135 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08de7fe0-2d54-408b-8e09-3e1b9bcf931a-catalog-content\") pod \"08de7fe0-2d54-408b-8e09-3e1b9bcf931a\" (UID: \"08de7fe0-2d54-408b-8e09-3e1b9bcf931a\") " Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.386222 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k4qw\" (UniqueName: \"kubernetes.io/projected/08de7fe0-2d54-408b-8e09-3e1b9bcf931a-kube-api-access-8k4qw\") pod \"08de7fe0-2d54-408b-8e09-3e1b9bcf931a\" (UID: \"08de7fe0-2d54-408b-8e09-3e1b9bcf931a\") " Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.386274 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/08de7fe0-2d54-408b-8e09-3e1b9bcf931a-utilities\") pod \"08de7fe0-2d54-408b-8e09-3e1b9bcf931a\" (UID: \"08de7fe0-2d54-408b-8e09-3e1b9bcf931a\") " Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.387475 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08de7fe0-2d54-408b-8e09-3e1b9bcf931a-utilities" (OuterVolumeSpecName: "utilities") pod "08de7fe0-2d54-408b-8e09-3e1b9bcf931a" (UID: "08de7fe0-2d54-408b-8e09-3e1b9bcf931a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.399777 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08de7fe0-2d54-408b-8e09-3e1b9bcf931a-kube-api-access-8k4qw" (OuterVolumeSpecName: "kube-api-access-8k4qw") pod "08de7fe0-2d54-408b-8e09-3e1b9bcf931a" (UID: "08de7fe0-2d54-408b-8e09-3e1b9bcf931a"). InnerVolumeSpecName "kube-api-access-8k4qw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.418191 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08de7fe0-2d54-408b-8e09-3e1b9bcf931a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08de7fe0-2d54-408b-8e09-3e1b9bcf931a" (UID: "08de7fe0-2d54-408b-8e09-3e1b9bcf931a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.487539 4824 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08de7fe0-2d54-408b-8e09-3e1b9bcf931a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.487574 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k4qw\" (UniqueName: \"kubernetes.io/projected/08de7fe0-2d54-408b-8e09-3e1b9bcf931a-kube-api-access-8k4qw\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.487585 4824 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08de7fe0-2d54-408b-8e09-3e1b9bcf931a-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.530569 4824 generic.go:334] "Generic (PLEG): container finished" podID="08de7fe0-2d54-408b-8e09-3e1b9bcf931a" containerID="508ed9b0876f895bd93f2220b145fd5e2693472af97b22cae0f711fdc293c35c" exitCode=0 Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.530641 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfzkw" event={"ID":"08de7fe0-2d54-408b-8e09-3e1b9bcf931a","Type":"ContainerDied","Data":"508ed9b0876f895bd93f2220b145fd5e2693472af97b22cae0f711fdc293c35c"} Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.530669 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mfzkw" Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.530712 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfzkw" event={"ID":"08de7fe0-2d54-408b-8e09-3e1b9bcf931a","Type":"ContainerDied","Data":"eef69387238650e8470f3abae2d3e6234452b8da7d8847435def291fdad9a1d8"} Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.530735 4824 scope.go:117] "RemoveContainer" containerID="508ed9b0876f895bd93f2220b145fd5e2693472af97b22cae0f711fdc293c35c" Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.556682 4824 scope.go:117] "RemoveContainer" containerID="c12d91234669c462b12713951c50eb0c0d50df4a3939e1596a445f3e78f2d6f0" Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.563420 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfzkw"] Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.565456 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfzkw"] Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.590534 4824 scope.go:117] "RemoveContainer" containerID="095e0801ed7074f62f4831cd0263857d8e7741a2b32dd7eae5398cdc3839e533" Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.607974 4824 scope.go:117] "RemoveContainer" containerID="508ed9b0876f895bd93f2220b145fd5e2693472af97b22cae0f711fdc293c35c" Feb 24 00:11:03 crc kubenswrapper[4824]: E0224 00:11:03.608925 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"508ed9b0876f895bd93f2220b145fd5e2693472af97b22cae0f711fdc293c35c\": container with ID starting with 508ed9b0876f895bd93f2220b145fd5e2693472af97b22cae0f711fdc293c35c not found: ID does not exist" containerID="508ed9b0876f895bd93f2220b145fd5e2693472af97b22cae0f711fdc293c35c" Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.608996 4824 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"508ed9b0876f895bd93f2220b145fd5e2693472af97b22cae0f711fdc293c35c"} err="failed to get container status \"508ed9b0876f895bd93f2220b145fd5e2693472af97b22cae0f711fdc293c35c\": rpc error: code = NotFound desc = could not find container \"508ed9b0876f895bd93f2220b145fd5e2693472af97b22cae0f711fdc293c35c\": container with ID starting with 508ed9b0876f895bd93f2220b145fd5e2693472af97b22cae0f711fdc293c35c not found: ID does not exist" Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.609030 4824 scope.go:117] "RemoveContainer" containerID="c12d91234669c462b12713951c50eb0c0d50df4a3939e1596a445f3e78f2d6f0" Feb 24 00:11:03 crc kubenswrapper[4824]: E0224 00:11:03.609596 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c12d91234669c462b12713951c50eb0c0d50df4a3939e1596a445f3e78f2d6f0\": container with ID starting with c12d91234669c462b12713951c50eb0c0d50df4a3939e1596a445f3e78f2d6f0 not found: ID does not exist" containerID="c12d91234669c462b12713951c50eb0c0d50df4a3939e1596a445f3e78f2d6f0" Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.609651 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c12d91234669c462b12713951c50eb0c0d50df4a3939e1596a445f3e78f2d6f0"} err="failed to get container status \"c12d91234669c462b12713951c50eb0c0d50df4a3939e1596a445f3e78f2d6f0\": rpc error: code = NotFound desc = could not find container \"c12d91234669c462b12713951c50eb0c0d50df4a3939e1596a445f3e78f2d6f0\": container with ID starting with c12d91234669c462b12713951c50eb0c0d50df4a3939e1596a445f3e78f2d6f0 not found: ID does not exist" Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.609686 4824 scope.go:117] "RemoveContainer" containerID="095e0801ed7074f62f4831cd0263857d8e7741a2b32dd7eae5398cdc3839e533" Feb 24 00:11:03 crc kubenswrapper[4824]: E0224 
00:11:03.610040 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"095e0801ed7074f62f4831cd0263857d8e7741a2b32dd7eae5398cdc3839e533\": container with ID starting with 095e0801ed7074f62f4831cd0263857d8e7741a2b32dd7eae5398cdc3839e533 not found: ID does not exist" containerID="095e0801ed7074f62f4831cd0263857d8e7741a2b32dd7eae5398cdc3839e533" Feb 24 00:11:03 crc kubenswrapper[4824]: I0224 00:11:03.610064 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"095e0801ed7074f62f4831cd0263857d8e7741a2b32dd7eae5398cdc3839e533"} err="failed to get container status \"095e0801ed7074f62f4831cd0263857d8e7741a2b32dd7eae5398cdc3839e533\": rpc error: code = NotFound desc = could not find container \"095e0801ed7074f62f4831cd0263857d8e7741a2b32dd7eae5398cdc3839e533\": container with ID starting with 095e0801ed7074f62f4831cd0263857d8e7741a2b32dd7eae5398cdc3839e533 not found: ID does not exist" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.210664 4824 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 24 00:11:04 crc kubenswrapper[4824]: E0224 00:11:04.210985 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00860ed-9085-40bb-9041-16eac6d88fb1" containerName="extract-utilities" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.211007 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00860ed-9085-40bb-9041-16eac6d88fb1" containerName="extract-utilities" Feb 24 00:11:04 crc kubenswrapper[4824]: E0224 00:11:04.211018 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08de7fe0-2d54-408b-8e09-3e1b9bcf931a" containerName="registry-server" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.211027 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="08de7fe0-2d54-408b-8e09-3e1b9bcf931a" containerName="registry-server" Feb 24 
00:11:04 crc kubenswrapper[4824]: E0224 00:11:04.211035 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08de7fe0-2d54-408b-8e09-3e1b9bcf931a" containerName="extract-content" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.211042 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="08de7fe0-2d54-408b-8e09-3e1b9bcf931a" containerName="extract-content" Feb 24 00:11:04 crc kubenswrapper[4824]: E0224 00:11:04.211060 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00860ed-9085-40bb-9041-16eac6d88fb1" containerName="extract-content" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.211067 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00860ed-9085-40bb-9041-16eac6d88fb1" containerName="extract-content" Feb 24 00:11:04 crc kubenswrapper[4824]: E0224 00:11:04.211075 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b00860ed-9085-40bb-9041-16eac6d88fb1" containerName="registry-server" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.211081 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="b00860ed-9085-40bb-9041-16eac6d88fb1" containerName="registry-server" Feb 24 00:11:04 crc kubenswrapper[4824]: E0224 00:11:04.211092 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08de7fe0-2d54-408b-8e09-3e1b9bcf931a" containerName="extract-utilities" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.211100 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="08de7fe0-2d54-408b-8e09-3e1b9bcf931a" containerName="extract-utilities" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.211229 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="08de7fe0-2d54-408b-8e09-3e1b9bcf931a" containerName="registry-server" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.211247 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="b00860ed-9085-40bb-9041-16eac6d88fb1" containerName="registry-server" Feb 24 
00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.211729 4824 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.211889 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.212233 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922" gracePeriod=15 Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.212286 4824 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.212315 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2" gracePeriod=15 Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.212328 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99" gracePeriod=15 Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.212429 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" 
containerID="cri-o://6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc" gracePeriod=15 Feb 24 00:11:04 crc kubenswrapper[4824]: E0224 00:11:04.212452 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.212464 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 24 00:11:04 crc kubenswrapper[4824]: E0224 00:11:04.212474 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.212481 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:11:04 crc kubenswrapper[4824]: E0224 00:11:04.212488 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.212494 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:11:04 crc kubenswrapper[4824]: E0224 00:11:04.212505 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.212528 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.212419 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb" gracePeriod=15 Feb 24 00:11:04 crc kubenswrapper[4824]: E0224 00:11:04.212537 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.212645 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 24 00:11:04 crc kubenswrapper[4824]: E0224 00:11:04.212676 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.212682 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 24 00:11:04 crc kubenswrapper[4824]: E0224 00:11:04.212701 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.212708 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 24 00:11:04 crc kubenswrapper[4824]: E0224 00:11:04.212722 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.212727 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.212920 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 
00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.212928 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.212942 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.212953 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.212963 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.212973 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.212979 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 24 00:11:04 crc kubenswrapper[4824]: E0224 00:11:04.213074 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.213083 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:11:04 crc kubenswrapper[4824]: E0224 00:11:04.213092 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.213100 4824 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.213242 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.213534 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:11:04 crc kubenswrapper[4824]: E0224 00:11:04.251913 4824 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.151:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.300012 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.300096 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.300148 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.300289 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.300344 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.300669 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.300691 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.300765 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.402183 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.402921 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.403002 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.403055 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.403021 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.403085 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.403176 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.403219 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.402590 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.403274 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.403304 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.403467 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.403568 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.403638 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.403783 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.403834 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.541612 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.542683 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.543231 4824 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb" exitCode=2 Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.552965 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:11:04 crc kubenswrapper[4824]: E0224 00:11:04.597597 4824 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.151:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1897064351af0018 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:11:04.596578328 +0000 UTC m=+328.586202797,LastTimestamp:2026-02-24 00:11:04.596578328 +0000 UTC m=+328.586202797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:11:04 crc kubenswrapper[4824]: I0224 00:11:04.701091 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08de7fe0-2d54-408b-8e09-3e1b9bcf931a" path="/var/lib/kubelet/pods/08de7fe0-2d54-408b-8e09-3e1b9bcf931a/volumes" Feb 24 00:11:05 crc kubenswrapper[4824]: E0224 00:11:05.179328 4824 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 24 00:11:05 crc kubenswrapper[4824]: E0224 00:11:05.179833 4824 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 24 00:11:05 crc kubenswrapper[4824]: E0224 00:11:05.180187 4824 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 24 00:11:05 crc kubenswrapper[4824]: E0224 00:11:05.180509 4824 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 24 00:11:05 crc kubenswrapper[4824]: E0224 00:11:05.181018 4824 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 24 00:11:05 crc kubenswrapper[4824]: I0224 00:11:05.181256 4824 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 24 00:11:05 crc kubenswrapper[4824]: E0224 00:11:05.181590 4824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="200ms" Feb 24 00:11:05 crc kubenswrapper[4824]: E0224 00:11:05.382576 4824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="400ms" Feb 24 00:11:05 crc kubenswrapper[4824]: I0224 00:11:05.552283 4824 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 24 00:11:05 crc kubenswrapper[4824]: I0224 00:11:05.554890 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 00:11:05 crc kubenswrapper[4824]: I0224 00:11:05.557834 4824 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99" exitCode=0 Feb 24 00:11:05 crc kubenswrapper[4824]: I0224 00:11:05.557883 4824 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2" exitCode=0 Feb 24 00:11:05 crc kubenswrapper[4824]: I0224 00:11:05.557900 4824 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc" exitCode=0 Feb 24 00:11:05 crc kubenswrapper[4824]: I0224 00:11:05.557948 4824 scope.go:117] "RemoveContainer" containerID="b537bda4e4e7c025e1c81ad716f22312ab00e988cca42f2bed056b96ed0a594a" Feb 24 00:11:05 crc kubenswrapper[4824]: I0224 00:11:05.560707 4824 generic.go:334] "Generic (PLEG): container finished" podID="466928f3-88e1-4111-8358-13db2bd5ba58" containerID="713672e1f8b7451df74461c3a60e57bbab1ffd950fc9f64d1a805ac3787f3127" exitCode=0 Feb 24 00:11:05 crc kubenswrapper[4824]: I0224 00:11:05.560828 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"466928f3-88e1-4111-8358-13db2bd5ba58","Type":"ContainerDied","Data":"713672e1f8b7451df74461c3a60e57bbab1ffd950fc9f64d1a805ac3787f3127"} Feb 24 00:11:05 crc kubenswrapper[4824]: I0224 00:11:05.561622 4824 status_manager.go:851] 
"Failed to get status for pod" podUID="466928f3-88e1-4111-8358-13db2bd5ba58" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 24 00:11:05 crc kubenswrapper[4824]: I0224 00:11:05.564028 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"552a62587bcd4bb19b0331dbaea1c6531dfe36fcaa945b33259b7c1bdcace817"} Feb 24 00:11:05 crc kubenswrapper[4824]: E0224 00:11:05.784354 4824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="800ms" Feb 24 00:11:06 crc kubenswrapper[4824]: I0224 00:11:06.572149 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1491c4b381f60602c7a24a8e5a735a39ea625073883a8aa7fd3568c56da3a7e7"} Feb 24 00:11:06 crc kubenswrapper[4824]: E0224 00:11:06.585086 4824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="1.6s" Feb 24 00:11:06 crc kubenswrapper[4824]: I0224 00:11:06.697877 4824 status_manager.go:851] "Failed to get status for pod" podUID="466928f3-88e1-4111-8358-13db2bd5ba58" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection 
refused" Feb 24 00:11:06 crc kubenswrapper[4824]: I0224 00:11:06.910103 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 24 00:11:06 crc kubenswrapper[4824]: I0224 00:11:06.910752 4824 status_manager.go:851] "Failed to get status for pod" podUID="466928f3-88e1-4111-8358-13db2bd5ba58" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.042808 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/466928f3-88e1-4111-8358-13db2bd5ba58-kubelet-dir\") pod \"466928f3-88e1-4111-8358-13db2bd5ba58\" (UID: \"466928f3-88e1-4111-8358-13db2bd5ba58\") " Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.042914 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/466928f3-88e1-4111-8358-13db2bd5ba58-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "466928f3-88e1-4111-8358-13db2bd5ba58" (UID: "466928f3-88e1-4111-8358-13db2bd5ba58"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.042974 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/466928f3-88e1-4111-8358-13db2bd5ba58-kube-api-access\") pod \"466928f3-88e1-4111-8358-13db2bd5ba58\" (UID: \"466928f3-88e1-4111-8358-13db2bd5ba58\") " Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.043004 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/466928f3-88e1-4111-8358-13db2bd5ba58-var-lock\") pod \"466928f3-88e1-4111-8358-13db2bd5ba58\" (UID: \"466928f3-88e1-4111-8358-13db2bd5ba58\") " Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.043193 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/466928f3-88e1-4111-8358-13db2bd5ba58-var-lock" (OuterVolumeSpecName: "var-lock") pod "466928f3-88e1-4111-8358-13db2bd5ba58" (UID: "466928f3-88e1-4111-8358-13db2bd5ba58"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.043364 4824 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/466928f3-88e1-4111-8358-13db2bd5ba58-var-lock\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.043392 4824 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/466928f3-88e1-4111-8358-13db2bd5ba58-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.078959 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/466928f3-88e1-4111-8358-13db2bd5ba58-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "466928f3-88e1-4111-8358-13db2bd5ba58" (UID: "466928f3-88e1-4111-8358-13db2bd5ba58"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.144174 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/466928f3-88e1-4111-8358-13db2bd5ba58-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.590228 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.591023 4824 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922" exitCode=0 Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.593397 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"466928f3-88e1-4111-8358-13db2bd5ba58","Type":"ContainerDied","Data":"3e14bb11973a03d1681cf1c9d6b14d165f03b40a1a2b66d541ff08ad0753d14f"} Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.593460 4824 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e14bb11973a03d1681cf1c9d6b14d165f03b40a1a2b66d541ff08ad0753d14f" Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.593858 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.593891 4824 status_manager.go:851] "Failed to get status for pod" podUID="466928f3-88e1-4111-8358-13db2bd5ba58" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 24 00:11:07 crc kubenswrapper[4824]: E0224 00:11:07.594093 4824 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.151:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.606793 4824 status_manager.go:851] "Failed to get status for pod" podUID="466928f3-88e1-4111-8358-13db2bd5ba58" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.776127 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.776944 4824 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.778169 4824 status_manager.go:851] "Failed to get status for pod" podUID="466928f3-88e1-4111-8358-13db2bd5ba58" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.778717 4824 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.853059 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.853177 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.853192 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.853245 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.853337 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.853449 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.853807 4824 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.853829 4824 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:07 crc kubenswrapper[4824]: I0224 00:11:07.853841 4824 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:08 crc kubenswrapper[4824]: E0224 00:11:08.186610 4824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="3.2s" Feb 24 00:11:08 crc kubenswrapper[4824]: E0224 00:11:08.512563 4824 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.151:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1897064351af0018 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:11:04.596578328 +0000 UTC m=+328.586202797,LastTimestamp:2026-02-24 00:11:04.596578328 +0000 UTC m=+328.586202797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:11:08 crc kubenswrapper[4824]: I0224 00:11:08.606315 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 00:11:08 crc kubenswrapper[4824]: I0224 00:11:08.608000 4824 scope.go:117] "RemoveContainer" containerID="db87a346b00854fb6769899a263eb5f7fad8cfa115365e2197c0e02f7285ff99" Feb 24 00:11:08 crc kubenswrapper[4824]: I0224 00:11:08.608080 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:08 crc kubenswrapper[4824]: I0224 00:11:08.622361 4824 status_manager.go:851] "Failed to get status for pod" podUID="466928f3-88e1-4111-8358-13db2bd5ba58" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 24 00:11:08 crc kubenswrapper[4824]: I0224 00:11:08.623248 4824 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 24 00:11:08 crc kubenswrapper[4824]: I0224 00:11:08.627304 4824 scope.go:117] "RemoveContainer" containerID="81bb21556a01a403315d2c9623aa07ab39ac4a789433423a89eda1da3547b6d2" Feb 24 00:11:08 crc kubenswrapper[4824]: I0224 00:11:08.642885 4824 scope.go:117] "RemoveContainer" containerID="6dd9b54ae9c9e895db2191be45a25d352d8c0617350519ad21f6ed3f2af941fc" Feb 24 00:11:08 crc kubenswrapper[4824]: I0224 00:11:08.655640 4824 scope.go:117] "RemoveContainer" containerID="bf3a76ba41dba79001ebad9083600800b0f094accb5dc60c2a364aa001909feb" Feb 24 00:11:08 crc kubenswrapper[4824]: I0224 00:11:08.672117 4824 scope.go:117] "RemoveContainer" containerID="09c8ceeca4d71401bcd6ae7d050ac58f840a0cc91f699f91f799930c896be922" Feb 24 00:11:08 crc kubenswrapper[4824]: I0224 00:11:08.686316 4824 scope.go:117] "RemoveContainer" containerID="5fec537d7330c01eaddfb379d343ead1a363fd5b2894edf23dcd5c6218a0302a" Feb 24 00:11:08 crc kubenswrapper[4824]: I0224 00:11:08.702759 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 24 00:11:09 
crc kubenswrapper[4824]: E0224 00:11:09.729979 4824 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.151:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" volumeName="registry-storage" Feb 24 00:11:10 crc kubenswrapper[4824]: I0224 00:11:10.800749 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:11:10 crc kubenswrapper[4824]: I0224 00:11:10.800940 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:11:10 crc kubenswrapper[4824]: W0224 00:11:10.802942 4824 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27342": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 00:11:10 crc kubenswrapper[4824]: E0224 00:11:10.803667 4824 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list 
*v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27342\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:11:10 crc kubenswrapper[4824]: I0224 00:11:10.804080 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:11:10 crc kubenswrapper[4824]: I0224 00:11:10.804918 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:11:10 crc kubenswrapper[4824]: W0224 00:11:10.804983 4824 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27341": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 00:11:10 crc kubenswrapper[4824]: E0224 00:11:10.805760 4824 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27341\": dial tcp 
38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:11:10 crc kubenswrapper[4824]: W0224 00:11:10.805752 4824 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27342": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 00:11:10 crc kubenswrapper[4824]: E0224 00:11:10.805900 4824 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27342\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:11:11 crc kubenswrapper[4824]: E0224 00:11:11.388057 4824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="6.4s" Feb 24 00:11:11 crc kubenswrapper[4824]: E0224 00:11:11.801868 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 24 00:11:11 crc kubenswrapper[4824]: E0224 00:11:11.802240 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 24 00:11:11 crc kubenswrapper[4824]: W0224 00:11:11.802655 4824 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27342": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 00:11:11 crc kubenswrapper[4824]: E0224 00:11:11.802742 4824 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27342\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:11:11 crc kubenswrapper[4824]: E0224 00:11:11.805192 4824 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: failed to sync secret cache: timed out waiting for the condition Feb 24 00:11:11 crc kubenswrapper[4824]: E0224 00:11:11.805224 4824 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: failed to sync configmap cache: timed out waiting for the condition Feb 24 00:11:11 crc kubenswrapper[4824]: E0224 00:11:11.805331 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:13:13.8052619 +0000 UTC m=+457.794886369 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync secret cache: timed out waiting for the condition Feb 24 00:11:11 crc kubenswrapper[4824]: E0224 00:11:11.805362 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:13:13.805350132 +0000 UTC m=+457.794974601 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync configmap cache: timed out waiting for the condition Feb 24 00:11:12 crc kubenswrapper[4824]: E0224 00:11:12.803146 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 24 00:11:12 crc kubenswrapper[4824]: E0224 00:11:12.803450 4824 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: failed to sync configmap cache: timed out waiting for the condition Feb 24 00:11:12 crc kubenswrapper[4824]: E0224 00:11:12.803191 4824 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 24 00:11:12 crc kubenswrapper[4824]: E0224 00:11:12.803493 4824 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: failed to sync configmap cache: timed out waiting for the condition Feb 24 00:11:12 crc kubenswrapper[4824]: E0224 00:11:12.803537 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 00:13:14.803495712 +0000 UTC m=+458.793120181 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : failed to sync configmap cache: timed out waiting for the condition Feb 24 00:11:12 crc kubenswrapper[4824]: E0224 00:11:12.803576 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 00:13:14.803556804 +0000 UTC m=+458.793181293 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : failed to sync configmap cache: timed out waiting for the condition Feb 24 00:11:12 crc kubenswrapper[4824]: W0224 00:11:12.970927 4824 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27341": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 00:11:12 crc kubenswrapper[4824]: E0224 00:11:12.971714 4824 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27341\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:11:13 crc kubenswrapper[4824]: W0224 00:11:13.230362 4824 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27342": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 00:11:13 crc kubenswrapper[4824]: E0224 00:11:13.230493 4824 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27342\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:11:13 crc kubenswrapper[4824]: W0224 00:11:13.725456 4824 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27342": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 00:11:13 crc kubenswrapper[4824]: E0224 00:11:13.725611 4824 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27342\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:11:14 crc kubenswrapper[4824]: W0224 00:11:14.530891 4824 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27342": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 00:11:14 crc kubenswrapper[4824]: E0224 00:11:14.531038 4824 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27342\": dial tcp 
38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:11:16 crc kubenswrapper[4824]: I0224 00:11:16.699309 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:16 crc kubenswrapper[4824]: I0224 00:11:16.700665 4824 status_manager.go:851] "Failed to get status for pod" podUID="466928f3-88e1-4111-8358-13db2bd5ba58" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 24 00:11:16 crc kubenswrapper[4824]: I0224 00:11:16.701232 4824 status_manager.go:851] "Failed to get status for pod" podUID="466928f3-88e1-4111-8358-13db2bd5ba58" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 24 00:11:16 crc kubenswrapper[4824]: I0224 00:11:16.722275 4824 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5209eb40-b077-42f2-9239-27bf1cde0e05" Feb 24 00:11:16 crc kubenswrapper[4824]: I0224 00:11:16.722692 4824 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5209eb40-b077-42f2-9239-27bf1cde0e05" Feb 24 00:11:16 crc kubenswrapper[4824]: E0224 00:11:16.723425 4824 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:16 crc kubenswrapper[4824]: I0224 00:11:16.724091 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:17 crc kubenswrapper[4824]: I0224 00:11:17.675029 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 24 00:11:17 crc kubenswrapper[4824]: I0224 00:11:17.677182 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 24 00:11:17 crc kubenswrapper[4824]: I0224 00:11:17.677258 4824 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="45746da193a3aaa64e64067e2324afb4e4fb1a90a1357461ed97c0bbb5109f5b" exitCode=1 Feb 24 00:11:17 crc kubenswrapper[4824]: I0224 00:11:17.677354 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"45746da193a3aaa64e64067e2324afb4e4fb1a90a1357461ed97c0bbb5109f5b"} Feb 24 00:11:17 crc kubenswrapper[4824]: I0224 00:11:17.678141 4824 scope.go:117] "RemoveContainer" containerID="45746da193a3aaa64e64067e2324afb4e4fb1a90a1357461ed97c0bbb5109f5b" Feb 24 00:11:17 crc kubenswrapper[4824]: I0224 00:11:17.678470 4824 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 24 00:11:17 crc kubenswrapper[4824]: I0224 00:11:17.679584 4824 status_manager.go:851] "Failed to get status for pod" podUID="466928f3-88e1-4111-8358-13db2bd5ba58" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 24 00:11:17 crc kubenswrapper[4824]: I0224 00:11:17.680821 4824 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="88e1482fbfe26a9ddc7dd159c8b550ccebd195491516483ad9ac1b92c7444bc3" exitCode=0 Feb 24 00:11:17 crc kubenswrapper[4824]: I0224 00:11:17.680867 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"88e1482fbfe26a9ddc7dd159c8b550ccebd195491516483ad9ac1b92c7444bc3"} Feb 24 00:11:17 crc kubenswrapper[4824]: I0224 00:11:17.680942 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9bffb7df3529fff53c3ecfb1a3689d2a402f10e28e581ffcfa2f90e68fcb4cf4"} Feb 24 00:11:17 crc kubenswrapper[4824]: I0224 00:11:17.681614 4824 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5209eb40-b077-42f2-9239-27bf1cde0e05" Feb 24 00:11:17 crc kubenswrapper[4824]: I0224 00:11:17.681658 4824 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5209eb40-b077-42f2-9239-27bf1cde0e05" Feb 24 00:11:17 crc kubenswrapper[4824]: I0224 00:11:17.682195 4824 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 24 00:11:17 crc kubenswrapper[4824]: E0224 00:11:17.682330 4824 mirror_client.go:138] "Failed deleting a mirror pod" 
err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:17 crc kubenswrapper[4824]: I0224 00:11:17.682692 4824 status_manager.go:851] "Failed to get status for pod" podUID="466928f3-88e1-4111-8358-13db2bd5ba58" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Feb 24 00:11:17 crc kubenswrapper[4824]: E0224 00:11:17.789402 4824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="7s" Feb 24 00:11:17 crc kubenswrapper[4824]: W0224 00:11:17.908757 4824 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27342": dial tcp 38.102.83.151:6443: connect: connection refused Feb 24 00:11:17 crc kubenswrapper[4824]: E0224 00:11:17.908849 4824 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27342\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:11:18 crc kubenswrapper[4824]: I0224 00:11:18.033954 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:11:18 crc kubenswrapper[4824]: I0224 00:11:18.689733 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 24 00:11:18 crc kubenswrapper[4824]: I0224 00:11:18.691808 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 24 00:11:18 crc kubenswrapper[4824]: I0224 00:11:18.691906 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"34eb8fb83da5aca983d6da868242cce539ecbefeda8efd3d70063bb191fa81ec"} Feb 24 00:11:18 crc kubenswrapper[4824]: I0224 00:11:18.703075 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2be20a298d84dcd9e3ae2da09fa4ae09872abedbc6a4cc9dd61ad0b7d4398737"} Feb 24 00:11:18 crc kubenswrapper[4824]: I0224 00:11:18.703131 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"787b3d96926e01383a9793acd1ef1b226f2a21553ca02562140611f205b2b7ec"} Feb 24 00:11:18 crc kubenswrapper[4824]: I0224 00:11:18.703151 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2edaa15be988babb6641b30c4b06a22b37ddfe81a7e261de5cabd621e2676659"} Feb 24 00:11:19 crc kubenswrapper[4824]: I0224 00:11:19.712750 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"47837f319ab77f6353d3859929f2dcc0baf720494b33907d4bfb682b813c7a3a"} Feb 24 00:11:19 crc kubenswrapper[4824]: I0224 00:11:19.712808 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e2a31b42788232608657e2d119d71b3e07fa9823ac32076ef7f5b0fcc0ea205b"} Feb 24 00:11:19 crc kubenswrapper[4824]: I0224 00:11:19.713061 4824 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5209eb40-b077-42f2-9239-27bf1cde0e05" Feb 24 00:11:19 crc kubenswrapper[4824]: I0224 00:11:19.713078 4824 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5209eb40-b077-42f2-9239-27bf1cde0e05" Feb 24 00:11:20 crc kubenswrapper[4824]: E0224 00:11:20.710183 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-s2dwl], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:11:21 crc kubenswrapper[4824]: E0224 00:11:21.717792 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert nginx-conf], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:11:21 crc kubenswrapper[4824]: I0224 00:11:21.724346 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:21 crc kubenswrapper[4824]: I0224 00:11:21.724390 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:21 crc kubenswrapper[4824]: I0224 00:11:21.730134 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:21 crc kubenswrapper[4824]: E0224 00:11:21.748821 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-cqllr], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:11:24 crc kubenswrapper[4824]: I0224 00:11:24.445822 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 24 00:11:24 crc kubenswrapper[4824]: I0224 00:11:24.703787 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 24 00:11:24 crc kubenswrapper[4824]: I0224 00:11:24.723144 4824 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:24 crc kubenswrapper[4824]: I0224 00:11:24.764131 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 24 00:11:25 crc kubenswrapper[4824]: I0224 00:11:25.744573 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:25 crc kubenswrapper[4824]: I0224 00:11:25.744695 4824 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5209eb40-b077-42f2-9239-27bf1cde0e05" Feb 24 00:11:25 crc kubenswrapper[4824]: I0224 00:11:25.744739 4824 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5209eb40-b077-42f2-9239-27bf1cde0e05" Feb 24 00:11:25 crc kubenswrapper[4824]: I0224 
00:11:25.751946 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:26 crc kubenswrapper[4824]: I0224 00:11:26.533705 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:11:26 crc kubenswrapper[4824]: I0224 00:11:26.708582 4824 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8f81cf07-5ee1-44c0-bd9f-eda3410b1085" Feb 24 00:11:26 crc kubenswrapper[4824]: I0224 00:11:26.749714 4824 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5209eb40-b077-42f2-9239-27bf1cde0e05" Feb 24 00:11:26 crc kubenswrapper[4824]: I0224 00:11:26.749745 4824 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5209eb40-b077-42f2-9239-27bf1cde0e05" Feb 24 00:11:26 crc kubenswrapper[4824]: I0224 00:11:26.753128 4824 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8f81cf07-5ee1-44c0-bd9f-eda3410b1085" Feb 24 00:11:26 crc kubenswrapper[4824]: I0224 00:11:26.978012 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 24 00:11:27 crc kubenswrapper[4824]: I0224 00:11:27.756458 4824 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5209eb40-b077-42f2-9239-27bf1cde0e05" Feb 24 00:11:27 crc kubenswrapper[4824]: I0224 00:11:27.756502 4824 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5209eb40-b077-42f2-9239-27bf1cde0e05" Feb 24 00:11:27 crc kubenswrapper[4824]: I0224 
00:11:27.760928 4824 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8f81cf07-5ee1-44c0-bd9f-eda3410b1085" Feb 24 00:11:28 crc kubenswrapper[4824]: I0224 00:11:28.030021 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:11:28 crc kubenswrapper[4824]: I0224 00:11:28.030672 4824 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 24 00:11:28 crc kubenswrapper[4824]: I0224 00:11:28.030810 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 24 00:11:32 crc kubenswrapper[4824]: I0224 00:11:32.693892 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:11:33 crc kubenswrapper[4824]: I0224 00:11:33.693188 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:11:34 crc kubenswrapper[4824]: I0224 00:11:34.841852 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 24 00:11:35 crc kubenswrapper[4824]: I0224 00:11:35.183024 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 24 00:11:35 crc kubenswrapper[4824]: I0224 00:11:35.397564 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 24 00:11:35 crc kubenswrapper[4824]: I0224 00:11:35.591198 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 24 00:11:35 crc kubenswrapper[4824]: I0224 00:11:35.617861 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 24 00:11:36 crc kubenswrapper[4824]: I0224 00:11:36.324886 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 24 00:11:36 crc kubenswrapper[4824]: I0224 00:11:36.355323 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 24 00:11:36 crc kubenswrapper[4824]: I0224 00:11:36.577705 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 24 00:11:36 crc kubenswrapper[4824]: I0224 00:11:36.693821 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:11:36 crc kubenswrapper[4824]: I0224 00:11:36.765576 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 24 00:11:36 crc kubenswrapper[4824]: I0224 00:11:36.849819 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 24 00:11:36 crc kubenswrapper[4824]: I0224 00:11:36.887997 4824 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 24 00:11:37 crc kubenswrapper[4824]: I0224 00:11:37.004464 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 24 00:11:37 crc kubenswrapper[4824]: I0224 00:11:37.197489 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 00:11:37 crc kubenswrapper[4824]: I0224 00:11:37.329526 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 24 00:11:37 crc kubenswrapper[4824]: I0224 00:11:37.367812 4824 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 24 00:11:37 crc kubenswrapper[4824]: I0224 00:11:37.693425 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 00:11:38 crc kubenswrapper[4824]: I0224 00:11:38.031246 4824 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 24 00:11:38 crc kubenswrapper[4824]: I0224 00:11:38.031352 4824 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 24 00:11:38 crc kubenswrapper[4824]: I0224 00:11:38.109187 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 24 00:11:38 crc kubenswrapper[4824]: I0224 00:11:38.118042 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 24 00:11:38 crc kubenswrapper[4824]: I0224 00:11:38.221305 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 00:11:38 crc kubenswrapper[4824]: I0224 00:11:38.224692 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 24 00:11:38 crc kubenswrapper[4824]: I0224 00:11:38.228853 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 24 00:11:38 crc kubenswrapper[4824]: I0224 00:11:38.232025 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 24 00:11:38 crc kubenswrapper[4824]: I0224 00:11:38.238202 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 24 00:11:38 crc kubenswrapper[4824]: I0224 00:11:38.354579 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 24 00:11:38 crc kubenswrapper[4824]: I0224 00:11:38.380478 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 
24 00:11:38 crc kubenswrapper[4824]: I0224 00:11:38.384372 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 24 00:11:38 crc kubenswrapper[4824]: I0224 00:11:38.442711 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 24 00:11:38 crc kubenswrapper[4824]: I0224 00:11:38.444317 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 24 00:11:38 crc kubenswrapper[4824]: I0224 00:11:38.543460 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 24 00:11:38 crc kubenswrapper[4824]: I0224 00:11:38.730461 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 24 00:11:38 crc kubenswrapper[4824]: I0224 00:11:38.860399 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 24 00:11:38 crc kubenswrapper[4824]: I0224 00:11:38.871736 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 24 00:11:38 crc kubenswrapper[4824]: I0224 00:11:38.899784 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 24 00:11:39 crc kubenswrapper[4824]: I0224 00:11:39.023592 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 24 00:11:39 crc kubenswrapper[4824]: I0224 00:11:39.028370 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 00:11:39 crc kubenswrapper[4824]: I0224 00:11:39.131235 4824 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 24 00:11:39 crc kubenswrapper[4824]: I0224 00:11:39.148623 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 24 00:11:39 crc kubenswrapper[4824]: I0224 00:11:39.249575 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 24 00:11:39 crc kubenswrapper[4824]: I0224 00:11:39.264593 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 24 00:11:39 crc kubenswrapper[4824]: I0224 00:11:39.469362 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 24 00:11:39 crc kubenswrapper[4824]: I0224 00:11:39.503482 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 24 00:11:39 crc kubenswrapper[4824]: I0224 00:11:39.548248 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 24 00:11:39 crc kubenswrapper[4824]: I0224 00:11:39.552279 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 24 00:11:39 crc kubenswrapper[4824]: I0224 00:11:39.576239 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 00:11:39 crc kubenswrapper[4824]: I0224 00:11:39.580861 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 24 00:11:39 crc kubenswrapper[4824]: I0224 00:11:39.610268 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 24 00:11:39 crc 
kubenswrapper[4824]: I0224 00:11:39.660455 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 24 00:11:39 crc kubenswrapper[4824]: I0224 00:11:39.688441 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 24 00:11:39 crc kubenswrapper[4824]: I0224 00:11:39.791567 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 24 00:11:39 crc kubenswrapper[4824]: I0224 00:11:39.905298 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 24 00:11:40 crc kubenswrapper[4824]: I0224 00:11:40.009775 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 24 00:11:40 crc kubenswrapper[4824]: I0224 00:11:40.405204 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 24 00:11:40 crc kubenswrapper[4824]: I0224 00:11:40.447126 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 24 00:11:40 crc kubenswrapper[4824]: I0224 00:11:40.515829 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 24 00:11:40 crc kubenswrapper[4824]: I0224 00:11:40.579938 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 24 00:11:40 crc kubenswrapper[4824]: I0224 00:11:40.642339 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 24 00:11:40 crc kubenswrapper[4824]: I0224 00:11:40.660935 4824 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 24 00:11:40 crc kubenswrapper[4824]: I0224 00:11:40.720300 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 24 00:11:40 crc kubenswrapper[4824]: I0224 00:11:40.751679 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 24 00:11:40 crc kubenswrapper[4824]: I0224 00:11:40.760534 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 24 00:11:40 crc kubenswrapper[4824]: I0224 00:11:40.809174 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 24 00:11:40 crc kubenswrapper[4824]: I0224 00:11:40.811335 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 24 00:11:40 crc kubenswrapper[4824]: I0224 00:11:40.882051 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 24 00:11:40 crc kubenswrapper[4824]: I0224 00:11:40.893466 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 24 00:11:41 crc kubenswrapper[4824]: I0224 00:11:41.014367 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 24 00:11:41 crc kubenswrapper[4824]: I0224 00:11:41.068977 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 00:11:41 crc kubenswrapper[4824]: I0224 00:11:41.085975 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 24 
00:11:41 crc kubenswrapper[4824]: I0224 00:11:41.157331 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 24 00:11:41 crc kubenswrapper[4824]: I0224 00:11:41.223824 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 24 00:11:41 crc kubenswrapper[4824]: I0224 00:11:41.225542 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 24 00:11:41 crc kubenswrapper[4824]: I0224 00:11:41.280685 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 24 00:11:41 crc kubenswrapper[4824]: I0224 00:11:41.330075 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 24 00:11:41 crc kubenswrapper[4824]: I0224 00:11:41.372359 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 24 00:11:41 crc kubenswrapper[4824]: I0224 00:11:41.391101 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 24 00:11:41 crc kubenswrapper[4824]: I0224 00:11:41.454259 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 24 00:11:41 crc kubenswrapper[4824]: I0224 00:11:41.538210 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 24 00:11:41 crc kubenswrapper[4824]: I0224 00:11:41.558046 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 24 00:11:41 crc kubenswrapper[4824]: I0224 00:11:41.633422 4824 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 24 00:11:41 crc kubenswrapper[4824]: I0224 00:11:41.643242 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 24 00:11:41 crc kubenswrapper[4824]: I0224 00:11:41.665817 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 24 00:11:41 crc kubenswrapper[4824]: I0224 00:11:41.715634 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 24 00:11:41 crc kubenswrapper[4824]: I0224 00:11:41.806458 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 24 00:11:41 crc kubenswrapper[4824]: I0224 00:11:41.923832 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 24 00:11:42 crc kubenswrapper[4824]: I0224 00:11:42.032550 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 24 00:11:42 crc kubenswrapper[4824]: I0224 00:11:42.037667 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 24 00:11:42 crc kubenswrapper[4824]: I0224 00:11:42.153313 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 24 00:11:42 crc kubenswrapper[4824]: I0224 00:11:42.244213 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 24 00:11:42 crc kubenswrapper[4824]: I0224 00:11:42.274779 4824 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Feb 24 00:11:42 crc kubenswrapper[4824]: I0224 00:11:42.289064 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 24 00:11:42 crc kubenswrapper[4824]: I0224 00:11:42.345228 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 24 00:11:42 crc kubenswrapper[4824]: I0224 00:11:42.347776 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 24 00:11:42 crc kubenswrapper[4824]: I0224 00:11:42.398248 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 24 00:11:42 crc kubenswrapper[4824]: I0224 00:11:42.417530 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 24 00:11:42 crc kubenswrapper[4824]: I0224 00:11:42.502385 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 24 00:11:42 crc kubenswrapper[4824]: I0224 00:11:42.518812 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 24 00:11:42 crc kubenswrapper[4824]: I0224 00:11:42.573948 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 24 00:11:42 crc kubenswrapper[4824]: I0224 00:11:42.580418 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 24 00:11:42 crc kubenswrapper[4824]: I0224 00:11:42.650952 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 24 00:11:42 crc 
kubenswrapper[4824]: I0224 00:11:42.686433 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 24 00:11:42 crc kubenswrapper[4824]: I0224 00:11:42.915672 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 24 00:11:42 crc kubenswrapper[4824]: I0224 00:11:42.992823 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.019935 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.030139 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.113197 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.120055 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.121186 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.297044 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.308283 4824 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.311968 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 00:11:43 crc 
kubenswrapper[4824]: I0224 00:11:43.312012 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.316790 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.338387 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.341982 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.341962438 podStartE2EDuration="19.341962438s" podCreationTimestamp="2026-02-24 00:11:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:11:43.338855873 +0000 UTC m=+367.328480372" watchObservedRunningTime="2026-02-24 00:11:43.341962438 +0000 UTC m=+367.331586907" Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.346072 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.378958 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.387537 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.460808 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.479425 4824 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.639656 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.701756 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.768997 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.818704 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.866316 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 24 00:11:43 crc kubenswrapper[4824]: I0224 00:11:43.994217 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.028746 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.051858 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.073545 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.073816 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.108989 4824 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.167277 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.187367 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.240375 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.259129 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.259479 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.260096 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.305852 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.436103 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.496014 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.533598 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.586284 4824 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.603628 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.604113 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.825557 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.843413 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.868335 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.882361 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.952488 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 24 00:11:44 crc kubenswrapper[4824]: I0224 00:11:44.970699 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 24 00:11:45 crc kubenswrapper[4824]: I0224 00:11:45.033599 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 24 00:11:45 crc kubenswrapper[4824]: I0224 00:11:45.039018 4824 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 00:11:45 crc kubenswrapper[4824]: I0224 00:11:45.088340 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 24 00:11:45 crc kubenswrapper[4824]: I0224 00:11:45.104794 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 24 00:11:45 crc kubenswrapper[4824]: I0224 00:11:45.241882 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 24 00:11:45 crc kubenswrapper[4824]: I0224 00:11:45.258573 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 24 00:11:45 crc kubenswrapper[4824]: I0224 00:11:45.292256 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 24 00:11:45 crc kubenswrapper[4824]: I0224 00:11:45.521418 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 24 00:11:45 crc kubenswrapper[4824]: I0224 00:11:45.602990 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 24 00:11:45 crc kubenswrapper[4824]: I0224 00:11:45.671959 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 24 00:11:45 crc kubenswrapper[4824]: I0224 00:11:45.717383 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 24 00:11:45 crc kubenswrapper[4824]: I0224 00:11:45.730107 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 24 00:11:45 crc 
kubenswrapper[4824]: I0224 00:11:45.736508 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 24 00:11:45 crc kubenswrapper[4824]: I0224 00:11:45.744449 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 24 00:11:45 crc kubenswrapper[4824]: I0224 00:11:45.785486 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 24 00:11:45 crc kubenswrapper[4824]: I0224 00:11:45.829062 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 24 00:11:45 crc kubenswrapper[4824]: I0224 00:11:45.904507 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 24 00:11:46 crc kubenswrapper[4824]: I0224 00:11:46.011639 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 24 00:11:46 crc kubenswrapper[4824]: I0224 00:11:46.081865 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 24 00:11:46 crc kubenswrapper[4824]: I0224 00:11:46.108744 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 00:11:46 crc kubenswrapper[4824]: I0224 00:11:46.143646 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 24 00:11:46 crc kubenswrapper[4824]: I0224 00:11:46.146743 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 24 00:11:46 crc kubenswrapper[4824]: I0224 00:11:46.167263 4824 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-tls" Feb 24 00:11:46 crc kubenswrapper[4824]: I0224 00:11:46.272782 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 24 00:11:46 crc kubenswrapper[4824]: I0224 00:11:46.294675 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 24 00:11:46 crc kubenswrapper[4824]: I0224 00:11:46.340411 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 24 00:11:46 crc kubenswrapper[4824]: I0224 00:11:46.442628 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 24 00:11:46 crc kubenswrapper[4824]: I0224 00:11:46.465248 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 24 00:11:46 crc kubenswrapper[4824]: I0224 00:11:46.557958 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 24 00:11:46 crc kubenswrapper[4824]: I0224 00:11:46.590863 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 24 00:11:46 crc kubenswrapper[4824]: I0224 00:11:46.683495 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 24 00:11:46 crc kubenswrapper[4824]: I0224 00:11:46.689104 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 24 00:11:46 crc kubenswrapper[4824]: I0224 00:11:46.699025 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 24 00:11:46 crc kubenswrapper[4824]: I0224 00:11:46.776836 4824 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 24 00:11:46 crc kubenswrapper[4824]: I0224 00:11:46.840593 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 24 00:11:46 crc kubenswrapper[4824]: I0224 00:11:46.875668 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 24 00:11:46 crc kubenswrapper[4824]: I0224 00:11:46.943830 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 24 00:11:47 crc kubenswrapper[4824]: I0224 00:11:47.105878 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 24 00:11:47 crc kubenswrapper[4824]: I0224 00:11:47.165745 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 24 00:11:47 crc kubenswrapper[4824]: I0224 00:11:47.195032 4824 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 24 00:11:47 crc kubenswrapper[4824]: I0224 00:11:47.195401 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://1491c4b381f60602c7a24a8e5a735a39ea625073883a8aa7fd3568c56da3a7e7" gracePeriod=5 Feb 24 00:11:47 crc kubenswrapper[4824]: I0224 00:11:47.197111 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 24 00:11:47 crc kubenswrapper[4824]: I0224 00:11:47.293683 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 24 00:11:47 crc 
kubenswrapper[4824]: I0224 00:11:47.301936 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 00:11:47 crc kubenswrapper[4824]: I0224 00:11:47.329016 4824 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 24 00:11:47 crc kubenswrapper[4824]: I0224 00:11:47.374149 4824 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 24 00:11:47 crc kubenswrapper[4824]: I0224 00:11:47.420998 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 24 00:11:47 crc kubenswrapper[4824]: I0224 00:11:47.433732 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 24 00:11:47 crc kubenswrapper[4824]: I0224 00:11:47.518161 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 24 00:11:47 crc kubenswrapper[4824]: I0224 00:11:47.586403 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 24 00:11:47 crc kubenswrapper[4824]: I0224 00:11:47.626211 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 24 00:11:47 crc kubenswrapper[4824]: I0224 00:11:47.665137 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 24 00:11:47 crc kubenswrapper[4824]: I0224 00:11:47.673080 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 24 00:11:47 crc kubenswrapper[4824]: I0224 00:11:47.692321 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 24 00:11:47 crc kubenswrapper[4824]: I0224 
00:11:47.821679 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 24 00:11:47 crc kubenswrapper[4824]: I0224 00:11:47.842709 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 24 00:11:47 crc kubenswrapper[4824]: I0224 00:11:47.919924 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 24 00:11:48 crc kubenswrapper[4824]: I0224 00:11:48.030963 4824 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 24 00:11:48 crc kubenswrapper[4824]: I0224 00:11:48.031033 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 24 00:11:48 crc kubenswrapper[4824]: I0224 00:11:48.031102 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:11:48 crc kubenswrapper[4824]: I0224 00:11:48.033055 4824 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"34eb8fb83da5aca983d6da868242cce539ecbefeda8efd3d70063bb191fa81ec"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 24 00:11:48 crc kubenswrapper[4824]: I0224 
00:11:48.033269 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://34eb8fb83da5aca983d6da868242cce539ecbefeda8efd3d70063bb191fa81ec" gracePeriod=30 Feb 24 00:11:48 crc kubenswrapper[4824]: I0224 00:11:48.094865 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 24 00:11:48 crc kubenswrapper[4824]: I0224 00:11:48.127882 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 24 00:11:48 crc kubenswrapper[4824]: I0224 00:11:48.128559 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 00:11:48 crc kubenswrapper[4824]: I0224 00:11:48.198643 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 24 00:11:48 crc kubenswrapper[4824]: I0224 00:11:48.211565 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 24 00:11:48 crc kubenswrapper[4824]: I0224 00:11:48.221937 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 24 00:11:48 crc kubenswrapper[4824]: I0224 00:11:48.231033 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 24 00:11:48 crc kubenswrapper[4824]: I0224 00:11:48.241783 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 24 00:11:48 crc kubenswrapper[4824]: I0224 00:11:48.360124 4824 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 24 00:11:48 crc kubenswrapper[4824]: I0224 00:11:48.496589 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 24 00:11:48 crc kubenswrapper[4824]: I0224 00:11:48.608226 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 24 00:11:48 crc kubenswrapper[4824]: I0224 00:11:48.618399 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 24 00:11:48 crc kubenswrapper[4824]: I0224 00:11:48.713618 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 24 00:11:48 crc kubenswrapper[4824]: I0224 00:11:48.838038 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 24 00:11:48 crc kubenswrapper[4824]: I0224 00:11:48.869109 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 00:11:48 crc kubenswrapper[4824]: I0224 00:11:48.922738 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 24 00:11:49 crc kubenswrapper[4824]: I0224 00:11:49.025490 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 24 00:11:49 crc kubenswrapper[4824]: I0224 00:11:49.033156 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 00:11:49 crc kubenswrapper[4824]: I0224 00:11:49.109640 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 24 00:11:49 crc 
kubenswrapper[4824]: I0224 00:11:49.176910 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 24 00:11:49 crc kubenswrapper[4824]: I0224 00:11:49.239150 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 24 00:11:49 crc kubenswrapper[4824]: I0224 00:11:49.241016 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 24 00:11:49 crc kubenswrapper[4824]: I0224 00:11:49.333125 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 24 00:11:49 crc kubenswrapper[4824]: I0224 00:11:49.376919 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 24 00:11:49 crc kubenswrapper[4824]: I0224 00:11:49.390630 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 24 00:11:49 crc kubenswrapper[4824]: I0224 00:11:49.487927 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 24 00:11:49 crc kubenswrapper[4824]: I0224 00:11:49.548761 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 24 00:11:49 crc kubenswrapper[4824]: I0224 00:11:49.567457 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 24 00:11:49 crc kubenswrapper[4824]: I0224 00:11:49.607552 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 24 00:11:49 crc kubenswrapper[4824]: I0224 00:11:49.662922 4824 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"etcd-serving-ca" Feb 24 00:11:49 crc kubenswrapper[4824]: I0224 00:11:49.663557 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 24 00:11:49 crc kubenswrapper[4824]: I0224 00:11:49.688101 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 24 00:11:49 crc kubenswrapper[4824]: I0224 00:11:49.760701 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 24 00:11:49 crc kubenswrapper[4824]: I0224 00:11:49.870212 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.125048 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.134970 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.150236 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.290686 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.353561 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dmjz7"] Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.353944 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dmjz7" podUID="cc119514-5c95-4925-8a1a-3e6844a34e1e" containerName="registry-server" 
containerID="cri-o://ace099e248827a099f2f9964ea688049a17101eb26b429197dc321a43c82c971" gracePeriod=30 Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.372402 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hhftg"] Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.373225 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hhftg" podUID="3e306ddf-071d-47f2-b9b1-bf772963438e" containerName="registry-server" containerID="cri-o://83d0a00bbb287f8c717cb0e93e56c8a769b62fbe8a1114585fcf0819cddb1d85" gracePeriod=30 Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.391393 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-99tkw"] Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.391698 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" podUID="e312a49f-dc7a-49fc-9baf-3105fec587ae" containerName="marketplace-operator" containerID="cri-o://1f7f84523e39d2e74db2895c5b1819295512a987f6083e74a45c4c25f78e706d" gracePeriod=30 Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.391861 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.404632 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nzqwf"] Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.404714 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gl27t"] Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.404982 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gl27t" podUID="2da73289-3f96-4828-a106-46c3b0469e7d" 
containerName="registry-server" containerID="cri-o://f3129bb41cd26ff02fd1b16661272cba00c8572d524dd0295795e4e681de10f0" gracePeriod=30 Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.405681 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nzqwf" podUID="b142d96b-87c3-444b-b135-fdddaa658234" containerName="registry-server" containerID="cri-o://d8cb34947ec733a964cff732c9bb70c2d8c98ea3a605270b5ec9f8c81b631a37" gracePeriod=30 Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.407760 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zxplg"] Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.408078 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zxplg" podUID="7a78c7d6-6ec6-4857-af87-25c5c8cf961d" containerName="registry-server" containerID="cri-o://da0889e558f40879c1fabc025b056771135a794508cda04386b8dc975f871518" gracePeriod=30 Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.469026 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.605569 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.677264 4824 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 24 00:11:50 crc kubenswrapper[4824]: E0224 00:11:50.820590 4824 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of da0889e558f40879c1fabc025b056771135a794508cda04386b8dc975f871518 is running failed: container process not found" containerID="da0889e558f40879c1fabc025b056771135a794508cda04386b8dc975f871518" 
cmd=["grpc_health_probe","-addr=:50051"] Feb 24 00:11:50 crc kubenswrapper[4824]: E0224 00:11:50.822073 4824 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of da0889e558f40879c1fabc025b056771135a794508cda04386b8dc975f871518 is running failed: container process not found" containerID="da0889e558f40879c1fabc025b056771135a794508cda04386b8dc975f871518" cmd=["grpc_health_probe","-addr=:50051"] Feb 24 00:11:50 crc kubenswrapper[4824]: E0224 00:11:50.822502 4824 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of da0889e558f40879c1fabc025b056771135a794508cda04386b8dc975f871518 is running failed: container process not found" containerID="da0889e558f40879c1fabc025b056771135a794508cda04386b8dc975f871518" cmd=["grpc_health_probe","-addr=:50051"] Feb 24 00:11:50 crc kubenswrapper[4824]: E0224 00:11:50.822614 4824 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of da0889e558f40879c1fabc025b056771135a794508cda04386b8dc975f871518 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-zxplg" podUID="7a78c7d6-6ec6-4857-af87-25c5c8cf961d" containerName="registry-server" Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.861002 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dmjz7" Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.908319 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.917180 4824 generic.go:334] "Generic (PLEG): container finished" podID="3e306ddf-071d-47f2-b9b1-bf772963438e" containerID="83d0a00bbb287f8c717cb0e93e56c8a769b62fbe8a1114585fcf0819cddb1d85" exitCode=0 Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.917267 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hhftg" event={"ID":"3e306ddf-071d-47f2-b9b1-bf772963438e","Type":"ContainerDied","Data":"83d0a00bbb287f8c717cb0e93e56c8a769b62fbe8a1114585fcf0819cddb1d85"} Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.919672 4824 generic.go:334] "Generic (PLEG): container finished" podID="2da73289-3f96-4828-a106-46c3b0469e7d" containerID="f3129bb41cd26ff02fd1b16661272cba00c8572d524dd0295795e4e681de10f0" exitCode=0 Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.935457 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl27t" event={"ID":"2da73289-3f96-4828-a106-46c3b0469e7d","Type":"ContainerDied","Data":"f3129bb41cd26ff02fd1b16661272cba00c8572d524dd0295795e4e681de10f0"} Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.948115 4824 generic.go:334] "Generic (PLEG): container finished" podID="7a78c7d6-6ec6-4857-af87-25c5c8cf961d" containerID="da0889e558f40879c1fabc025b056771135a794508cda04386b8dc975f871518" exitCode=0 Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.948265 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxplg" event={"ID":"7a78c7d6-6ec6-4857-af87-25c5c8cf961d","Type":"ContainerDied","Data":"da0889e558f40879c1fabc025b056771135a794508cda04386b8dc975f871518"} Feb 24 
00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.970838 4824 generic.go:334] "Generic (PLEG): container finished" podID="b142d96b-87c3-444b-b135-fdddaa658234" containerID="d8cb34947ec733a964cff732c9bb70c2d8c98ea3a605270b5ec9f8c81b631a37" exitCode=0 Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.970939 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nzqwf" event={"ID":"b142d96b-87c3-444b-b135-fdddaa658234","Type":"ContainerDied","Data":"d8cb34947ec733a964cff732c9bb70c2d8c98ea3a605270b5ec9f8c81b631a37"} Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.977285 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxnlm\" (UniqueName: \"kubernetes.io/projected/cc119514-5c95-4925-8a1a-3e6844a34e1e-kube-api-access-bxnlm\") pod \"cc119514-5c95-4925-8a1a-3e6844a34e1e\" (UID: \"cc119514-5c95-4925-8a1a-3e6844a34e1e\") " Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.977388 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc119514-5c95-4925-8a1a-3e6844a34e1e-utilities\") pod \"cc119514-5c95-4925-8a1a-3e6844a34e1e\" (UID: \"cc119514-5c95-4925-8a1a-3e6844a34e1e\") " Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.977487 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc119514-5c95-4925-8a1a-3e6844a34e1e-catalog-content\") pod \"cc119514-5c95-4925-8a1a-3e6844a34e1e\" (UID: \"cc119514-5c95-4925-8a1a-3e6844a34e1e\") " Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.982811 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc119514-5c95-4925-8a1a-3e6844a34e1e-utilities" (OuterVolumeSpecName: "utilities") pod "cc119514-5c95-4925-8a1a-3e6844a34e1e" (UID: "cc119514-5c95-4925-8a1a-3e6844a34e1e"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.987146 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc119514-5c95-4925-8a1a-3e6844a34e1e-kube-api-access-bxnlm" (OuterVolumeSpecName: "kube-api-access-bxnlm") pod "cc119514-5c95-4925-8a1a-3e6844a34e1e" (UID: "cc119514-5c95-4925-8a1a-3e6844a34e1e"). InnerVolumeSpecName "kube-api-access-bxnlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.988894 4824 generic.go:334] "Generic (PLEG): container finished" podID="e312a49f-dc7a-49fc-9baf-3105fec587ae" containerID="1f7f84523e39d2e74db2895c5b1819295512a987f6083e74a45c4c25f78e706d" exitCode=0 Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.988998 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" event={"ID":"e312a49f-dc7a-49fc-9baf-3105fec587ae","Type":"ContainerDied","Data":"1f7f84523e39d2e74db2895c5b1819295512a987f6083e74a45c4c25f78e706d"} Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.994158 4824 generic.go:334] "Generic (PLEG): container finished" podID="cc119514-5c95-4925-8a1a-3e6844a34e1e" containerID="ace099e248827a099f2f9964ea688049a17101eb26b429197dc321a43c82c971" exitCode=0 Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.994201 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmjz7" event={"ID":"cc119514-5c95-4925-8a1a-3e6844a34e1e","Type":"ContainerDied","Data":"ace099e248827a099f2f9964ea688049a17101eb26b429197dc321a43c82c971"} Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.994226 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dmjz7" 
event={"ID":"cc119514-5c95-4925-8a1a-3e6844a34e1e","Type":"ContainerDied","Data":"42b124ba705dc951f666837537c3a14e76c91f608879722e252c98578703a4ac"} Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.994246 4824 scope.go:117] "RemoveContainer" containerID="ace099e248827a099f2f9964ea688049a17101eb26b429197dc321a43c82c971" Feb 24 00:11:50 crc kubenswrapper[4824]: I0224 00:11:50.994419 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dmjz7" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.000830 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hhftg" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.003957 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.018048 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gl27t" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.021074 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zxplg" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.028846 4824 scope.go:117] "RemoveContainer" containerID="49dee3eafa362f85cb1a0da9a797f4ffd8f10b2e2e1b307ecd8002d23b24ca9f" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.046036 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nzqwf" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.046369 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc119514-5c95-4925-8a1a-3e6844a34e1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc119514-5c95-4925-8a1a-3e6844a34e1e" (UID: "cc119514-5c95-4925-8a1a-3e6844a34e1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.059169 4824 scope.go:117] "RemoveContainer" containerID="0a0ed56ec1c248b7e9a29be5b115fbce61e3aaf9d9d45ae0ef4e45bef08de9cc" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.078333 4824 scope.go:117] "RemoveContainer" containerID="ace099e248827a099f2f9964ea688049a17101eb26b429197dc321a43c82c971" Feb 24 00:11:51 crc kubenswrapper[4824]: E0224 00:11:51.078898 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ace099e248827a099f2f9964ea688049a17101eb26b429197dc321a43c82c971\": container with ID starting with ace099e248827a099f2f9964ea688049a17101eb26b429197dc321a43c82c971 not found: ID does not exist" containerID="ace099e248827a099f2f9964ea688049a17101eb26b429197dc321a43c82c971" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.078958 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ace099e248827a099f2f9964ea688049a17101eb26b429197dc321a43c82c971"} err="failed to get container status \"ace099e248827a099f2f9964ea688049a17101eb26b429197dc321a43c82c971\": rpc error: code = NotFound desc = could not find container \"ace099e248827a099f2f9964ea688049a17101eb26b429197dc321a43c82c971\": container with ID starting with ace099e248827a099f2f9964ea688049a17101eb26b429197dc321a43c82c971 not found: ID does not exist" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 
00:11:51.079080 4824 scope.go:117] "RemoveContainer" containerID="49dee3eafa362f85cb1a0da9a797f4ffd8f10b2e2e1b307ecd8002d23b24ca9f" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.079704 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f76k\" (UniqueName: \"kubernetes.io/projected/e312a49f-dc7a-49fc-9baf-3105fec587ae-kube-api-access-6f76k\") pod \"e312a49f-dc7a-49fc-9baf-3105fec587ae\" (UID: \"e312a49f-dc7a-49fc-9baf-3105fec587ae\") " Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.079854 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e306ddf-071d-47f2-b9b1-bf772963438e-catalog-content\") pod \"3e306ddf-071d-47f2-b9b1-bf772963438e\" (UID: \"3e306ddf-071d-47f2-b9b1-bf772963438e\") " Feb 24 00:11:51 crc kubenswrapper[4824]: E0224 00:11:51.079881 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49dee3eafa362f85cb1a0da9a797f4ffd8f10b2e2e1b307ecd8002d23b24ca9f\": container with ID starting with 49dee3eafa362f85cb1a0da9a797f4ffd8f10b2e2e1b307ecd8002d23b24ca9f not found: ID does not exist" containerID="49dee3eafa362f85cb1a0da9a797f4ffd8f10b2e2e1b307ecd8002d23b24ca9f" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.079911 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49dee3eafa362f85cb1a0da9a797f4ffd8f10b2e2e1b307ecd8002d23b24ca9f"} err="failed to get container status \"49dee3eafa362f85cb1a0da9a797f4ffd8f10b2e2e1b307ecd8002d23b24ca9f\": rpc error: code = NotFound desc = could not find container \"49dee3eafa362f85cb1a0da9a797f4ffd8f10b2e2e1b307ecd8002d23b24ca9f\": container with ID starting with 49dee3eafa362f85cb1a0da9a797f4ffd8f10b2e2e1b307ecd8002d23b24ca9f not found: ID does not exist" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.079930 4824 scope.go:117] 
"RemoveContainer" containerID="0a0ed56ec1c248b7e9a29be5b115fbce61e3aaf9d9d45ae0ef4e45bef08de9cc" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.079932 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e312a49f-dc7a-49fc-9baf-3105fec587ae-marketplace-trusted-ca\") pod \"e312a49f-dc7a-49fc-9baf-3105fec587ae\" (UID: \"e312a49f-dc7a-49fc-9baf-3105fec587ae\") " Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.080036 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e312a49f-dc7a-49fc-9baf-3105fec587ae-marketplace-operator-metrics\") pod \"e312a49f-dc7a-49fc-9baf-3105fec587ae\" (UID: \"e312a49f-dc7a-49fc-9baf-3105fec587ae\") " Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.080117 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e306ddf-071d-47f2-b9b1-bf772963438e-utilities\") pod \"3e306ddf-071d-47f2-b9b1-bf772963438e\" (UID: \"3e306ddf-071d-47f2-b9b1-bf772963438e\") " Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.080169 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndbcf\" (UniqueName: \"kubernetes.io/projected/3e306ddf-071d-47f2-b9b1-bf772963438e-kube-api-access-ndbcf\") pod \"3e306ddf-071d-47f2-b9b1-bf772963438e\" (UID: \"3e306ddf-071d-47f2-b9b1-bf772963438e\") " Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.080763 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e312a49f-dc7a-49fc-9baf-3105fec587ae-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "e312a49f-dc7a-49fc-9baf-3105fec587ae" (UID: "e312a49f-dc7a-49fc-9baf-3105fec587ae"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.080887 4824 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc119514-5c95-4925-8a1a-3e6844a34e1e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.080918 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxnlm\" (UniqueName: \"kubernetes.io/projected/cc119514-5c95-4925-8a1a-3e6844a34e1e-kube-api-access-bxnlm\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.080934 4824 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc119514-5c95-4925-8a1a-3e6844a34e1e-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.080946 4824 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e312a49f-dc7a-49fc-9baf-3105fec587ae-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.081961 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e306ddf-071d-47f2-b9b1-bf772963438e-utilities" (OuterVolumeSpecName: "utilities") pod "3e306ddf-071d-47f2-b9b1-bf772963438e" (UID: "3e306ddf-071d-47f2-b9b1-bf772963438e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:11:51 crc kubenswrapper[4824]: E0224 00:11:51.082614 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a0ed56ec1c248b7e9a29be5b115fbce61e3aaf9d9d45ae0ef4e45bef08de9cc\": container with ID starting with 0a0ed56ec1c248b7e9a29be5b115fbce61e3aaf9d9d45ae0ef4e45bef08de9cc not found: ID does not exist" containerID="0a0ed56ec1c248b7e9a29be5b115fbce61e3aaf9d9d45ae0ef4e45bef08de9cc" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.082703 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a0ed56ec1c248b7e9a29be5b115fbce61e3aaf9d9d45ae0ef4e45bef08de9cc"} err="failed to get container status \"0a0ed56ec1c248b7e9a29be5b115fbce61e3aaf9d9d45ae0ef4e45bef08de9cc\": rpc error: code = NotFound desc = could not find container \"0a0ed56ec1c248b7e9a29be5b115fbce61e3aaf9d9d45ae0ef4e45bef08de9cc\": container with ID starting with 0a0ed56ec1c248b7e9a29be5b115fbce61e3aaf9d9d45ae0ef4e45bef08de9cc not found: ID does not exist" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.083555 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e312a49f-dc7a-49fc-9baf-3105fec587ae-kube-api-access-6f76k" (OuterVolumeSpecName: "kube-api-access-6f76k") pod "e312a49f-dc7a-49fc-9baf-3105fec587ae" (UID: "e312a49f-dc7a-49fc-9baf-3105fec587ae"). InnerVolumeSpecName "kube-api-access-6f76k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.084760 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e312a49f-dc7a-49fc-9baf-3105fec587ae-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "e312a49f-dc7a-49fc-9baf-3105fec587ae" (UID: "e312a49f-dc7a-49fc-9baf-3105fec587ae"). 
InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.086150 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e306ddf-071d-47f2-b9b1-bf772963438e-kube-api-access-ndbcf" (OuterVolumeSpecName: "kube-api-access-ndbcf") pod "3e306ddf-071d-47f2-b9b1-bf772963438e" (UID: "3e306ddf-071d-47f2-b9b1-bf772963438e"). InnerVolumeSpecName "kube-api-access-ndbcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.118472 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.135452 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.154340 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e306ddf-071d-47f2-b9b1-bf772963438e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e306ddf-071d-47f2-b9b1-bf772963438e" (UID: "3e306ddf-071d-47f2-b9b1-bf772963438e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.181664 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc6ds\" (UniqueName: \"kubernetes.io/projected/7a78c7d6-6ec6-4857-af87-25c5c8cf961d-kube-api-access-zc6ds\") pod \"7a78c7d6-6ec6-4857-af87-25c5c8cf961d\" (UID: \"7a78c7d6-6ec6-4857-af87-25c5c8cf961d\") " Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.181716 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hggqz\" (UniqueName: \"kubernetes.io/projected/b142d96b-87c3-444b-b135-fdddaa658234-kube-api-access-hggqz\") pod \"b142d96b-87c3-444b-b135-fdddaa658234\" (UID: \"b142d96b-87c3-444b-b135-fdddaa658234\") " Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.181788 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a78c7d6-6ec6-4857-af87-25c5c8cf961d-catalog-content\") pod \"7a78c7d6-6ec6-4857-af87-25c5c8cf961d\" (UID: \"7a78c7d6-6ec6-4857-af87-25c5c8cf961d\") " Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.181835 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fhvh\" (UniqueName: \"kubernetes.io/projected/2da73289-3f96-4828-a106-46c3b0469e7d-kube-api-access-7fhvh\") pod \"2da73289-3f96-4828-a106-46c3b0469e7d\" (UID: \"2da73289-3f96-4828-a106-46c3b0469e7d\") " Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.181861 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b142d96b-87c3-444b-b135-fdddaa658234-utilities\") pod \"b142d96b-87c3-444b-b135-fdddaa658234\" (UID: \"b142d96b-87c3-444b-b135-fdddaa658234\") " Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.181883 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a78c7d6-6ec6-4857-af87-25c5c8cf961d-utilities\") pod \"7a78c7d6-6ec6-4857-af87-25c5c8cf961d\" (UID: \"7a78c7d6-6ec6-4857-af87-25c5c8cf961d\") " Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.181918 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2da73289-3f96-4828-a106-46c3b0469e7d-utilities\") pod \"2da73289-3f96-4828-a106-46c3b0469e7d\" (UID: \"2da73289-3f96-4828-a106-46c3b0469e7d\") " Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.181936 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2da73289-3f96-4828-a106-46c3b0469e7d-catalog-content\") pod \"2da73289-3f96-4828-a106-46c3b0469e7d\" (UID: \"2da73289-3f96-4828-a106-46c3b0469e7d\") " Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.181970 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b142d96b-87c3-444b-b135-fdddaa658234-catalog-content\") pod \"b142d96b-87c3-444b-b135-fdddaa658234\" (UID: \"b142d96b-87c3-444b-b135-fdddaa658234\") " Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.182220 4824 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e306ddf-071d-47f2-b9b1-bf772963438e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.182232 4824 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e312a49f-dc7a-49fc-9baf-3105fec587ae-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.182242 4824 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3e306ddf-071d-47f2-b9b1-bf772963438e-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.182252 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndbcf\" (UniqueName: \"kubernetes.io/projected/3e306ddf-071d-47f2-b9b1-bf772963438e-kube-api-access-ndbcf\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.182263 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f76k\" (UniqueName: \"kubernetes.io/projected/e312a49f-dc7a-49fc-9baf-3105fec587ae-kube-api-access-6f76k\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.183195 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b142d96b-87c3-444b-b135-fdddaa658234-utilities" (OuterVolumeSpecName: "utilities") pod "b142d96b-87c3-444b-b135-fdddaa658234" (UID: "b142d96b-87c3-444b-b135-fdddaa658234"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.183408 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a78c7d6-6ec6-4857-af87-25c5c8cf961d-utilities" (OuterVolumeSpecName: "utilities") pod "7a78c7d6-6ec6-4857-af87-25c5c8cf961d" (UID: "7a78c7d6-6ec6-4857-af87-25c5c8cf961d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.183478 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2da73289-3f96-4828-a106-46c3b0469e7d-utilities" (OuterVolumeSpecName: "utilities") pod "2da73289-3f96-4828-a106-46c3b0469e7d" (UID: "2da73289-3f96-4828-a106-46c3b0469e7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.185315 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b142d96b-87c3-444b-b135-fdddaa658234-kube-api-access-hggqz" (OuterVolumeSpecName: "kube-api-access-hggqz") pod "b142d96b-87c3-444b-b135-fdddaa658234" (UID: "b142d96b-87c3-444b-b135-fdddaa658234"). InnerVolumeSpecName "kube-api-access-hggqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.185479 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da73289-3f96-4828-a106-46c3b0469e7d-kube-api-access-7fhvh" (OuterVolumeSpecName: "kube-api-access-7fhvh") pod "2da73289-3f96-4828-a106-46c3b0469e7d" (UID: "2da73289-3f96-4828-a106-46c3b0469e7d"). InnerVolumeSpecName "kube-api-access-7fhvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.196287 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a78c7d6-6ec6-4857-af87-25c5c8cf961d-kube-api-access-zc6ds" (OuterVolumeSpecName: "kube-api-access-zc6ds") pod "7a78c7d6-6ec6-4857-af87-25c5c8cf961d" (UID: "7a78c7d6-6ec6-4857-af87-25c5c8cf961d"). InnerVolumeSpecName "kube-api-access-zc6ds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.205643 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b142d96b-87c3-444b-b135-fdddaa658234-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b142d96b-87c3-444b-b135-fdddaa658234" (UID: "b142d96b-87c3-444b-b135-fdddaa658234"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.283633 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fhvh\" (UniqueName: \"kubernetes.io/projected/2da73289-3f96-4828-a106-46c3b0469e7d-kube-api-access-7fhvh\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.283683 4824 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b142d96b-87c3-444b-b135-fdddaa658234-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.283695 4824 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a78c7d6-6ec6-4857-af87-25c5c8cf961d-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.283703 4824 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2da73289-3f96-4828-a106-46c3b0469e7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.283712 4824 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b142d96b-87c3-444b-b135-fdddaa658234-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.283724 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hggqz\" (UniqueName: \"kubernetes.io/projected/b142d96b-87c3-444b-b135-fdddaa658234-kube-api-access-hggqz\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.283734 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc6ds\" (UniqueName: \"kubernetes.io/projected/7a78c7d6-6ec6-4857-af87-25c5c8cf961d-kube-api-access-zc6ds\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 
00:11:51.324157 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2da73289-3f96-4828-a106-46c3b0469e7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2da73289-3f96-4828-a106-46c3b0469e7d" (UID: "2da73289-3f96-4828-a106-46c3b0469e7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.334972 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dmjz7"] Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.338256 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dmjz7"] Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.347483 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a78c7d6-6ec6-4857-af87-25c5c8cf961d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a78c7d6-6ec6-4857-af87-25c5c8cf961d" (UID: "7a78c7d6-6ec6-4857-af87-25c5c8cf961d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.384463 4824 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a78c7d6-6ec6-4857-af87-25c5c8cf961d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:51 crc kubenswrapper[4824]: I0224 00:11:51.384838 4824 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2da73289-3f96-4828-a106-46c3b0469e7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.001503 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hhftg" event={"ID":"3e306ddf-071d-47f2-b9b1-bf772963438e","Type":"ContainerDied","Data":"24c650baf1648fdbc140def26b06acbc896c72aa2095332a4a2cc286bdf3cc0c"} Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.001622 4824 scope.go:117] "RemoveContainer" containerID="83d0a00bbb287f8c717cb0e93e56c8a769b62fbe8a1114585fcf0819cddb1d85" Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.001550 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hhftg" Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.003982 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl27t" event={"ID":"2da73289-3f96-4828-a106-46c3b0469e7d","Type":"ContainerDied","Data":"2503a134b22274bc6e70e9fb4c998a82c8a291a8ce5041c5a448cbf0b7c362a7"} Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.003994 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gl27t" Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.007052 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zxplg" event={"ID":"7a78c7d6-6ec6-4857-af87-25c5c8cf961d","Type":"ContainerDied","Data":"6ef8798cf5f3aadb98a5ae1d2d3bf34bb35cf168ac8076ee6ba9bc741a06b98b"} Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.007124 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zxplg" Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.010179 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nzqwf" event={"ID":"b142d96b-87c3-444b-b135-fdddaa658234","Type":"ContainerDied","Data":"6087d7cb108c4772f5476645e00887a465effb8e262d89a746313bbbb9fb34f8"} Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.010254 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nzqwf" Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.013495 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" event={"ID":"e312a49f-dc7a-49fc-9baf-3105fec587ae","Type":"ContainerDied","Data":"1e7b695fbb51788dd119d9e0ae76024be2038ddb563a00cf87c9d5c4544df61f"} Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.013596 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-99tkw" Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.017242 4824 scope.go:117] "RemoveContainer" containerID="4336adaefce1f631229f06eda9fede5b34bd7e94028955471812962455639142" Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.045888 4824 scope.go:117] "RemoveContainer" containerID="165f557a643df29a5f3055b0f6055d2350a6f07b3c59175faba79784672bcb83" Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.053406 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gl27t"] Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.057842 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gl27t"] Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.073549 4824 scope.go:117] "RemoveContainer" containerID="f3129bb41cd26ff02fd1b16661272cba00c8572d524dd0295795e4e681de10f0" Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.074738 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zxplg"] Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.078487 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zxplg"] Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.085732 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hhftg"] Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.092300 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hhftg"] Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.096790 4824 scope.go:117] "RemoveContainer" containerID="09e81517976ec38b505938bb2df2f3b6123c4b30e798621cb83825dcef2c35b1" Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.102976 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-nzqwf"]
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.106611 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nzqwf"]
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.112130 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-99tkw"]
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.115348 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-99tkw"]
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.128593 4824 scope.go:117] "RemoveContainer" containerID="92570d872625fe189d1225ae3cfcceb0efc1931cef5c4ee603139bb405c9eff3"
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.143322 4824 scope.go:117] "RemoveContainer" containerID="da0889e558f40879c1fabc025b056771135a794508cda04386b8dc975f871518"
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.157374 4824 scope.go:117] "RemoveContainer" containerID="ae3090316a207f659563cb6daa67a1cc4d3c280950cb420d2cf6d0ddebc465d5"
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.172872 4824 scope.go:117] "RemoveContainer" containerID="05173dd075227354e5c8172cf583a8c34fd894215338d07f6c1a9644348f85b0"
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.183849 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.185756 4824 scope.go:117] "RemoveContainer" containerID="d8cb34947ec733a964cff732c9bb70c2d8c98ea3a605270b5ec9f8c81b631a37"
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.201752 4824 scope.go:117] "RemoveContainer" containerID="8bd5382363dfe954b11d2958183ea67ba5ab63752a6364c784c6c9e09c7286e0"
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.218878 4824 scope.go:117] "RemoveContainer" containerID="5af3da4115b49b00d3bb13283e7fccd617f9a8fbd1e5c6782e319a1b0a15e513"
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.241896 4824 scope.go:117] "RemoveContainer" containerID="1f7f84523e39d2e74db2895c5b1819295512a987f6083e74a45c4c25f78e706d"
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.369668 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.615026 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.701408 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2da73289-3f96-4828-a106-46c3b0469e7d" path="/var/lib/kubelet/pods/2da73289-3f96-4828-a106-46c3b0469e7d/volumes"
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.702118 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e306ddf-071d-47f2-b9b1-bf772963438e" path="/var/lib/kubelet/pods/3e306ddf-071d-47f2-b9b1-bf772963438e/volumes"
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.702961 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a78c7d6-6ec6-4857-af87-25c5c8cf961d" path="/var/lib/kubelet/pods/7a78c7d6-6ec6-4857-af87-25c5c8cf961d/volumes"
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.704264 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b142d96b-87c3-444b-b135-fdddaa658234" path="/var/lib/kubelet/pods/b142d96b-87c3-444b-b135-fdddaa658234/volumes"
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.704991 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc119514-5c95-4925-8a1a-3e6844a34e1e" path="/var/lib/kubelet/pods/cc119514-5c95-4925-8a1a-3e6844a34e1e/volumes"
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.706252 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e312a49f-dc7a-49fc-9baf-3105fec587ae" path="/var/lib/kubelet/pods/e312a49f-dc7a-49fc-9baf-3105fec587ae/volumes"
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.793141 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.793237 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.903990 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.904098 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.904256 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.904286 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.904327 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.904374 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.904503 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.904589 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.904650 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.904897 4824 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.904914 4824 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.904925 4824 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.904937 4824 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 24 00:11:52 crc kubenswrapper[4824]: I0224 00:11:52.914428 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 00:11:53 crc kubenswrapper[4824]: I0224 00:11:53.006277 4824 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 24 00:11:53 crc kubenswrapper[4824]: I0224 00:11:53.028927 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 24 00:11:53 crc kubenswrapper[4824]: I0224 00:11:53.028991 4824 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="1491c4b381f60602c7a24a8e5a735a39ea625073883a8aa7fd3568c56da3a7e7" exitCode=137
Feb 24 00:11:53 crc kubenswrapper[4824]: I0224 00:11:53.029100 4824 scope.go:117] "RemoveContainer" containerID="1491c4b381f60602c7a24a8e5a735a39ea625073883a8aa7fd3568c56da3a7e7"
Feb 24 00:11:53 crc kubenswrapper[4824]: I0224 00:11:53.029348 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 24 00:11:53 crc kubenswrapper[4824]: I0224 00:11:53.049882 4824 scope.go:117] "RemoveContainer" containerID="1491c4b381f60602c7a24a8e5a735a39ea625073883a8aa7fd3568c56da3a7e7"
Feb 24 00:11:53 crc kubenswrapper[4824]: E0224 00:11:53.050805 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1491c4b381f60602c7a24a8e5a735a39ea625073883a8aa7fd3568c56da3a7e7\": container with ID starting with 1491c4b381f60602c7a24a8e5a735a39ea625073883a8aa7fd3568c56da3a7e7 not found: ID does not exist" containerID="1491c4b381f60602c7a24a8e5a735a39ea625073883a8aa7fd3568c56da3a7e7"
Feb 24 00:11:53 crc kubenswrapper[4824]: I0224 00:11:53.050873 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1491c4b381f60602c7a24a8e5a735a39ea625073883a8aa7fd3568c56da3a7e7"} err="failed to get container status \"1491c4b381f60602c7a24a8e5a735a39ea625073883a8aa7fd3568c56da3a7e7\": rpc error: code = NotFound desc = could not find container \"1491c4b381f60602c7a24a8e5a735a39ea625073883a8aa7fd3568c56da3a7e7\": container with ID starting with 1491c4b381f60602c7a24a8e5a735a39ea625073883a8aa7fd3568c56da3a7e7 not found: ID does not exist"
Feb 24 00:11:54 crc kubenswrapper[4824]: I0224 00:11:54.700879 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.193912 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9t5cw"]
Feb 24 00:12:16 crc kubenswrapper[4824]: E0224 00:12:16.194731 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da73289-3f96-4828-a106-46c3b0469e7d" containerName="registry-server"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.194747 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da73289-3f96-4828-a106-46c3b0469e7d" containerName="registry-server"
Feb 24 00:12:16 crc kubenswrapper[4824]: E0224 00:12:16.194759 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a78c7d6-6ec6-4857-af87-25c5c8cf961d" containerName="extract-content"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.194768 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a78c7d6-6ec6-4857-af87-25c5c8cf961d" containerName="extract-content"
Feb 24 00:12:16 crc kubenswrapper[4824]: E0224 00:12:16.194780 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="466928f3-88e1-4111-8358-13db2bd5ba58" containerName="installer"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.194790 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="466928f3-88e1-4111-8358-13db2bd5ba58" containerName="installer"
Feb 24 00:12:16 crc kubenswrapper[4824]: E0224 00:12:16.194800 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a78c7d6-6ec6-4857-af87-25c5c8cf961d" containerName="extract-utilities"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.194809 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a78c7d6-6ec6-4857-af87-25c5c8cf961d" containerName="extract-utilities"
Feb 24 00:12:16 crc kubenswrapper[4824]: E0224 00:12:16.194819 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da73289-3f96-4828-a106-46c3b0469e7d" containerName="extract-content"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.194827 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da73289-3f96-4828-a106-46c3b0469e7d" containerName="extract-content"
Feb 24 00:12:16 crc kubenswrapper[4824]: E0224 00:12:16.194837 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e306ddf-071d-47f2-b9b1-bf772963438e" containerName="registry-server"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.194845 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e306ddf-071d-47f2-b9b1-bf772963438e" containerName="registry-server"
Feb 24 00:12:16 crc kubenswrapper[4824]: E0224 00:12:16.194857 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b142d96b-87c3-444b-b135-fdddaa658234" containerName="registry-server"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.194865 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="b142d96b-87c3-444b-b135-fdddaa658234" containerName="registry-server"
Feb 24 00:12:16 crc kubenswrapper[4824]: E0224 00:12:16.194876 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a78c7d6-6ec6-4857-af87-25c5c8cf961d" containerName="registry-server"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.194884 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a78c7d6-6ec6-4857-af87-25c5c8cf961d" containerName="registry-server"
Feb 24 00:12:16 crc kubenswrapper[4824]: E0224 00:12:16.194898 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.194906 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 24 00:12:16 crc kubenswrapper[4824]: E0224 00:12:16.194919 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b142d96b-87c3-444b-b135-fdddaa658234" containerName="extract-utilities"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.194928 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="b142d96b-87c3-444b-b135-fdddaa658234" containerName="extract-utilities"
Feb 24 00:12:16 crc kubenswrapper[4824]: E0224 00:12:16.194940 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc119514-5c95-4925-8a1a-3e6844a34e1e" containerName="extract-content"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.194948 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc119514-5c95-4925-8a1a-3e6844a34e1e" containerName="extract-content"
Feb 24 00:12:16 crc kubenswrapper[4824]: E0224 00:12:16.194962 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e306ddf-071d-47f2-b9b1-bf772963438e" containerName="extract-utilities"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.194970 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e306ddf-071d-47f2-b9b1-bf772963438e" containerName="extract-utilities"
Feb 24 00:12:16 crc kubenswrapper[4824]: E0224 00:12:16.194981 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc119514-5c95-4925-8a1a-3e6844a34e1e" containerName="extract-utilities"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.194991 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc119514-5c95-4925-8a1a-3e6844a34e1e" containerName="extract-utilities"
Feb 24 00:12:16 crc kubenswrapper[4824]: E0224 00:12:16.195001 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e306ddf-071d-47f2-b9b1-bf772963438e" containerName="extract-content"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.195009 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e306ddf-071d-47f2-b9b1-bf772963438e" containerName="extract-content"
Feb 24 00:12:16 crc kubenswrapper[4824]: E0224 00:12:16.195019 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc119514-5c95-4925-8a1a-3e6844a34e1e" containerName="registry-server"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.195027 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc119514-5c95-4925-8a1a-3e6844a34e1e" containerName="registry-server"
Feb 24 00:12:16 crc kubenswrapper[4824]: E0224 00:12:16.195039 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e312a49f-dc7a-49fc-9baf-3105fec587ae" containerName="marketplace-operator"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.195047 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="e312a49f-dc7a-49fc-9baf-3105fec587ae" containerName="marketplace-operator"
Feb 24 00:12:16 crc kubenswrapper[4824]: E0224 00:12:16.195063 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da73289-3f96-4828-a106-46c3b0469e7d" containerName="extract-utilities"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.195071 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da73289-3f96-4828-a106-46c3b0469e7d" containerName="extract-utilities"
Feb 24 00:12:16 crc kubenswrapper[4824]: E0224 00:12:16.195083 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b142d96b-87c3-444b-b135-fdddaa658234" containerName="extract-content"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.195092 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="b142d96b-87c3-444b-b135-fdddaa658234" containerName="extract-content"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.195230 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="2da73289-3f96-4828-a106-46c3b0469e7d" containerName="registry-server"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.195244 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="e312a49f-dc7a-49fc-9baf-3105fec587ae" containerName="marketplace-operator"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.195256 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e306ddf-071d-47f2-b9b1-bf772963438e" containerName="registry-server"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.195269 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc119514-5c95-4925-8a1a-3e6844a34e1e" containerName="registry-server"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.195281 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="b142d96b-87c3-444b-b135-fdddaa658234" containerName="registry-server"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.195296 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.195310 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="466928f3-88e1-4111-8358-13db2bd5ba58" containerName="installer"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.195323 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a78c7d6-6ec6-4857-af87-25c5c8cf961d" containerName="registry-server"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.197217 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t5cw"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.208852 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.209038 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.209169 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.221019 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da3bd34-bc43-4c9d-a974-a131ad945913-utilities\") pod \"community-operators-9t5cw\" (UID: \"9da3bd34-bc43-4c9d-a974-a131ad945913\") " pod="openshift-marketplace/community-operators-9t5cw"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.221115 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrvss\" (UniqueName: \"kubernetes.io/projected/9da3bd34-bc43-4c9d-a974-a131ad945913-kube-api-access-jrvss\") pod \"community-operators-9t5cw\" (UID: \"9da3bd34-bc43-4c9d-a974-a131ad945913\") " pod="openshift-marketplace/community-operators-9t5cw"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.221164 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da3bd34-bc43-4c9d-a974-a131ad945913-catalog-content\") pod \"community-operators-9t5cw\" (UID: \"9da3bd34-bc43-4c9d-a974-a131ad945913\") " pod="openshift-marketplace/community-operators-9t5cw"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.222436 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9t5cw"]
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.321822 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da3bd34-bc43-4c9d-a974-a131ad945913-catalog-content\") pod \"community-operators-9t5cw\" (UID: \"9da3bd34-bc43-4c9d-a974-a131ad945913\") " pod="openshift-marketplace/community-operators-9t5cw"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.321896 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da3bd34-bc43-4c9d-a974-a131ad945913-utilities\") pod \"community-operators-9t5cw\" (UID: \"9da3bd34-bc43-4c9d-a974-a131ad945913\") " pod="openshift-marketplace/community-operators-9t5cw"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.322439 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrvss\" (UniqueName: \"kubernetes.io/projected/9da3bd34-bc43-4c9d-a974-a131ad945913-kube-api-access-jrvss\") pod \"community-operators-9t5cw\" (UID: \"9da3bd34-bc43-4c9d-a974-a131ad945913\") " pod="openshift-marketplace/community-operators-9t5cw"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.322539 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da3bd34-bc43-4c9d-a974-a131ad945913-utilities\") pod \"community-operators-9t5cw\" (UID: \"9da3bd34-bc43-4c9d-a974-a131ad945913\") " pod="openshift-marketplace/community-operators-9t5cw"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.322894 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da3bd34-bc43-4c9d-a974-a131ad945913-catalog-content\") pod \"community-operators-9t5cw\" (UID: \"9da3bd34-bc43-4c9d-a974-a131ad945913\") " pod="openshift-marketplace/community-operators-9t5cw"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.342567 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrvss\" (UniqueName: \"kubernetes.io/projected/9da3bd34-bc43-4c9d-a974-a131ad945913-kube-api-access-jrvss\") pod \"community-operators-9t5cw\" (UID: \"9da3bd34-bc43-4c9d-a974-a131ad945913\") " pod="openshift-marketplace/community-operators-9t5cw"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.534363 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9t5cw"
Feb 24 00:12:16 crc kubenswrapper[4824]: I0224 00:12:16.970466 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9t5cw"]
Feb 24 00:12:17 crc kubenswrapper[4824]: I0224 00:12:17.164303 4824 generic.go:334] "Generic (PLEG): container finished" podID="9da3bd34-bc43-4c9d-a974-a131ad945913" containerID="2ed26ef799edef0ffd0e95ed37ad0f017318b13d3836a24eab989c21d4c788fa" exitCode=0
Feb 24 00:12:17 crc kubenswrapper[4824]: I0224 00:12:17.164396 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t5cw" event={"ID":"9da3bd34-bc43-4c9d-a974-a131ad945913","Type":"ContainerDied","Data":"2ed26ef799edef0ffd0e95ed37ad0f017318b13d3836a24eab989c21d4c788fa"}
Feb 24 00:12:17 crc kubenswrapper[4824]: I0224 00:12:17.164457 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t5cw" event={"ID":"9da3bd34-bc43-4c9d-a974-a131ad945913","Type":"ContainerStarted","Data":"8f3d32031602416c5c08e80ddfc202c716f95da638d1dfebd16062c3d0142dcc"}
Feb 24 00:12:17 crc kubenswrapper[4824]: I0224 00:12:17.583650 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9gq54"]
Feb 24 00:12:17 crc kubenswrapper[4824]: I0224 00:12:17.584619 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9gq54"
Feb 24 00:12:17 crc kubenswrapper[4824]: I0224 00:12:17.587264 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 24 00:12:17 crc kubenswrapper[4824]: I0224 00:12:17.601840 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9gq54"]
Feb 24 00:12:17 crc kubenswrapper[4824]: I0224 00:12:17.640874 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxh7r\" (UniqueName: \"kubernetes.io/projected/2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d-kube-api-access-xxh7r\") pod \"certified-operators-9gq54\" (UID: \"2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d\") " pod="openshift-marketplace/certified-operators-9gq54"
Feb 24 00:12:17 crc kubenswrapper[4824]: I0224 00:12:17.640926 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d-catalog-content\") pod \"certified-operators-9gq54\" (UID: \"2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d\") " pod="openshift-marketplace/certified-operators-9gq54"
Feb 24 00:12:17 crc kubenswrapper[4824]: I0224 00:12:17.641087 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d-utilities\") pod \"certified-operators-9gq54\" (UID: \"2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d\") " pod="openshift-marketplace/certified-operators-9gq54"
Feb 24 00:12:17 crc kubenswrapper[4824]: I0224 00:12:17.742237 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d-utilities\") pod \"certified-operators-9gq54\" (UID: \"2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d\") " pod="openshift-marketplace/certified-operators-9gq54"
Feb 24 00:12:17 crc kubenswrapper[4824]: I0224 00:12:17.742316 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxh7r\" (UniqueName: \"kubernetes.io/projected/2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d-kube-api-access-xxh7r\") pod \"certified-operators-9gq54\" (UID: \"2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d\") " pod="openshift-marketplace/certified-operators-9gq54"
Feb 24 00:12:17 crc kubenswrapper[4824]: I0224 00:12:17.742380 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d-catalog-content\") pod \"certified-operators-9gq54\" (UID: \"2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d\") " pod="openshift-marketplace/certified-operators-9gq54"
Feb 24 00:12:17 crc kubenswrapper[4824]: I0224 00:12:17.742886 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d-utilities\") pod \"certified-operators-9gq54\" (UID: \"2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d\") " pod="openshift-marketplace/certified-operators-9gq54"
Feb 24 00:12:17 crc kubenswrapper[4824]: I0224 00:12:17.742966 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d-catalog-content\") pod \"certified-operators-9gq54\" (UID: \"2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d\") " pod="openshift-marketplace/certified-operators-9gq54"
Feb 24 00:12:17 crc kubenswrapper[4824]: I0224 00:12:17.767816 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxh7r\" (UniqueName: \"kubernetes.io/projected/2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d-kube-api-access-xxh7r\") pod \"certified-operators-9gq54\" (UID: \"2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d\") " pod="openshift-marketplace/certified-operators-9gq54"
Feb 24 00:12:17 crc kubenswrapper[4824]: I0224 00:12:17.900427 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9gq54"
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.153688 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9gq54"]
Feb 24 00:12:18 crc kubenswrapper[4824]: W0224 00:12:18.158193 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d7ceac8_1cca_49dc_bff6_f6fa38cbfc1d.slice/crio-5a7b263c271ecf10633b1c8eca972c691e190e8eeb3103a58ee1330d194d7bd0 WatchSource:0}: Error finding container 5a7b263c271ecf10633b1c8eca972c691e190e8eeb3103a58ee1330d194d7bd0: Status 404 returned error can't find the container with id 5a7b263c271ecf10633b1c8eca972c691e190e8eeb3103a58ee1330d194d7bd0
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.172917 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.174386 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log"
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.176058 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.176134 4824 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="34eb8fb83da5aca983d6da868242cce539ecbefeda8efd3d70063bb191fa81ec" exitCode=137
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.176221 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"34eb8fb83da5aca983d6da868242cce539ecbefeda8efd3d70063bb191fa81ec"}
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.176301 4824 scope.go:117] "RemoveContainer" containerID="45746da193a3aaa64e64067e2324afb4e4fb1a90a1357461ed97c0bbb5109f5b"
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.178737 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gq54" event={"ID":"2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d","Type":"ContainerStarted","Data":"5a7b263c271ecf10633b1c8eca972c691e190e8eeb3103a58ee1330d194d7bd0"}
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.183407 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t5cw" event={"ID":"9da3bd34-bc43-4c9d-a974-a131ad945913","Type":"ContainerStarted","Data":"16fc88db0b88666a21a051566414e9cd9655444221b748206f54ed15178554e6"}
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.584948 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-49mft"]
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.585900 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-49mft"
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.587597 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.595965 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-49mft"]
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.759778 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hxsv\" (UniqueName: \"kubernetes.io/projected/a392c527-174d-4f66-a7cd-5f625192f3c7-kube-api-access-5hxsv\") pod \"redhat-marketplace-49mft\" (UID: \"a392c527-174d-4f66-a7cd-5f625192f3c7\") " pod="openshift-marketplace/redhat-marketplace-49mft"
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.759912 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a392c527-174d-4f66-a7cd-5f625192f3c7-utilities\") pod \"redhat-marketplace-49mft\" (UID: \"a392c527-174d-4f66-a7cd-5f625192f3c7\") " pod="openshift-marketplace/redhat-marketplace-49mft"
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.759935 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a392c527-174d-4f66-a7cd-5f625192f3c7-catalog-content\") pod \"redhat-marketplace-49mft\" (UID: \"a392c527-174d-4f66-a7cd-5f625192f3c7\") " pod="openshift-marketplace/redhat-marketplace-49mft"
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.861608 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a392c527-174d-4f66-a7cd-5f625192f3c7-utilities\") pod \"redhat-marketplace-49mft\" (UID: \"a392c527-174d-4f66-a7cd-5f625192f3c7\") " pod="openshift-marketplace/redhat-marketplace-49mft"
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.861674 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a392c527-174d-4f66-a7cd-5f625192f3c7-catalog-content\") pod \"redhat-marketplace-49mft\" (UID: \"a392c527-174d-4f66-a7cd-5f625192f3c7\") " pod="openshift-marketplace/redhat-marketplace-49mft"
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.861798 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hxsv\" (UniqueName: \"kubernetes.io/projected/a392c527-174d-4f66-a7cd-5f625192f3c7-kube-api-access-5hxsv\") pod \"redhat-marketplace-49mft\" (UID: \"a392c527-174d-4f66-a7cd-5f625192f3c7\") " pod="openshift-marketplace/redhat-marketplace-49mft"
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.862636 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a392c527-174d-4f66-a7cd-5f625192f3c7-utilities\") pod \"redhat-marketplace-49mft\" (UID: \"a392c527-174d-4f66-a7cd-5f625192f3c7\") " pod="openshift-marketplace/redhat-marketplace-49mft"
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.862866 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a392c527-174d-4f66-a7cd-5f625192f3c7-catalog-content\") pod \"redhat-marketplace-49mft\" (UID: \"a392c527-174d-4f66-a7cd-5f625192f3c7\") " pod="openshift-marketplace/redhat-marketplace-49mft"
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.886033 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hxsv\" (UniqueName: \"kubernetes.io/projected/a392c527-174d-4f66-a7cd-5f625192f3c7-kube-api-access-5hxsv\") pod \"redhat-marketplace-49mft\" (UID: \"a392c527-174d-4f66-a7cd-5f625192f3c7\") " pod="openshift-marketplace/redhat-marketplace-49mft"
Feb 24 00:12:18 crc kubenswrapper[4824]: I0224 00:12:18.905838 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-49mft"
Feb 24 00:12:19 crc kubenswrapper[4824]: I0224 00:12:19.191814 4824 generic.go:334] "Generic (PLEG): container finished" podID="2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d" containerID="7d1531cf7ef51dd43c129802fe3e10d3c81e1c31d5c3eee8dca4bb27d6f84300" exitCode=0
Feb 24 00:12:19 crc kubenswrapper[4824]: I0224 00:12:19.191916 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gq54" event={"ID":"2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d","Type":"ContainerDied","Data":"7d1531cf7ef51dd43c129802fe3e10d3c81e1c31d5c3eee8dca4bb27d6f84300"}
Feb 24 00:12:19 crc kubenswrapper[4824]: I0224 00:12:19.197445 4824 generic.go:334] "Generic (PLEG): container finished" podID="9da3bd34-bc43-4c9d-a974-a131ad945913" containerID="16fc88db0b88666a21a051566414e9cd9655444221b748206f54ed15178554e6" exitCode=0
Feb 24 00:12:19 crc kubenswrapper[4824]: I0224 00:12:19.197557 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t5cw" event={"ID":"9da3bd34-bc43-4c9d-a974-a131ad945913","Type":"ContainerDied","Data":"16fc88db0b88666a21a051566414e9cd9655444221b748206f54ed15178554e6"}
Feb 24 00:12:19 crc kubenswrapper[4824]: I0224 00:12:19.204122 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Feb 24 00:12:19 crc kubenswrapper[4824]: I0224 00:12:19.205935 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log"
Feb 24 00:12:19 crc kubenswrapper[4824]: I0224
00:12:19.207009 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cd788104ee5073679e89d6716044c8f9b1bc2b1b2f1e8430a5c80eac94b1bc14"} Feb 24 00:12:19 crc kubenswrapper[4824]: I0224 00:12:19.306809 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-49mft"] Feb 24 00:12:19 crc kubenswrapper[4824]: W0224 00:12:19.318229 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda392c527_174d_4f66_a7cd_5f625192f3c7.slice/crio-c3b19531b333b7d03df785c5e3fd25d1eeba8ccb1da22f7e993d8082b132b9a9 WatchSource:0}: Error finding container c3b19531b333b7d03df785c5e3fd25d1eeba8ccb1da22f7e993d8082b132b9a9: Status 404 returned error can't find the container with id c3b19531b333b7d03df785c5e3fd25d1eeba8ccb1da22f7e993d8082b132b9a9 Feb 24 00:12:19 crc kubenswrapper[4824]: I0224 00:12:19.988148 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2gh9t"] Feb 24 00:12:19 crc kubenswrapper[4824]: I0224 00:12:19.990280 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2gh9t" Feb 24 00:12:19 crc kubenswrapper[4824]: I0224 00:12:19.996957 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 24 00:12:20 crc kubenswrapper[4824]: I0224 00:12:20.003007 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2gh9t"] Feb 24 00:12:20 crc kubenswrapper[4824]: I0224 00:12:20.179678 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4tlq\" (UniqueName: \"kubernetes.io/projected/ee751741-65c5-4db2-aa84-8c1e6868cf86-kube-api-access-s4tlq\") pod \"redhat-operators-2gh9t\" (UID: \"ee751741-65c5-4db2-aa84-8c1e6868cf86\") " pod="openshift-marketplace/redhat-operators-2gh9t" Feb 24 00:12:20 crc kubenswrapper[4824]: I0224 00:12:20.179760 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee751741-65c5-4db2-aa84-8c1e6868cf86-utilities\") pod \"redhat-operators-2gh9t\" (UID: \"ee751741-65c5-4db2-aa84-8c1e6868cf86\") " pod="openshift-marketplace/redhat-operators-2gh9t" Feb 24 00:12:20 crc kubenswrapper[4824]: I0224 00:12:20.180124 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee751741-65c5-4db2-aa84-8c1e6868cf86-catalog-content\") pod \"redhat-operators-2gh9t\" (UID: \"ee751741-65c5-4db2-aa84-8c1e6868cf86\") " pod="openshift-marketplace/redhat-operators-2gh9t" Feb 24 00:12:20 crc kubenswrapper[4824]: I0224 00:12:20.215799 4824 generic.go:334] "Generic (PLEG): container finished" podID="a392c527-174d-4f66-a7cd-5f625192f3c7" containerID="cbad1694c39409f46b82fab5f1dbe7bae8d23677e9a40e2d515237465aad7438" exitCode=0 Feb 24 00:12:20 crc kubenswrapper[4824]: I0224 00:12:20.215880 4824 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49mft" event={"ID":"a392c527-174d-4f66-a7cd-5f625192f3c7","Type":"ContainerDied","Data":"cbad1694c39409f46b82fab5f1dbe7bae8d23677e9a40e2d515237465aad7438"} Feb 24 00:12:20 crc kubenswrapper[4824]: I0224 00:12:20.215913 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49mft" event={"ID":"a392c527-174d-4f66-a7cd-5f625192f3c7","Type":"ContainerStarted","Data":"c3b19531b333b7d03df785c5e3fd25d1eeba8ccb1da22f7e993d8082b132b9a9"} Feb 24 00:12:20 crc kubenswrapper[4824]: I0224 00:12:20.220808 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gq54" event={"ID":"2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d","Type":"ContainerStarted","Data":"436fe7ae7f38dd1b2015c03bc4e174f6d2505305fb8bdb910e42d905aca7ff72"} Feb 24 00:12:20 crc kubenswrapper[4824]: I0224 00:12:20.226163 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9t5cw" event={"ID":"9da3bd34-bc43-4c9d-a974-a131ad945913","Type":"ContainerStarted","Data":"dd40460b2a99c7b75ae256225c8a47501b34e8656c2a46b2f3826d3108123382"} Feb 24 00:12:20 crc kubenswrapper[4824]: I0224 00:12:20.281600 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee751741-65c5-4db2-aa84-8c1e6868cf86-catalog-content\") pod \"redhat-operators-2gh9t\" (UID: \"ee751741-65c5-4db2-aa84-8c1e6868cf86\") " pod="openshift-marketplace/redhat-operators-2gh9t" Feb 24 00:12:20 crc kubenswrapper[4824]: I0224 00:12:20.281667 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4tlq\" (UniqueName: \"kubernetes.io/projected/ee751741-65c5-4db2-aa84-8c1e6868cf86-kube-api-access-s4tlq\") pod \"redhat-operators-2gh9t\" (UID: \"ee751741-65c5-4db2-aa84-8c1e6868cf86\") " 
pod="openshift-marketplace/redhat-operators-2gh9t" Feb 24 00:12:20 crc kubenswrapper[4824]: I0224 00:12:20.282363 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee751741-65c5-4db2-aa84-8c1e6868cf86-catalog-content\") pod \"redhat-operators-2gh9t\" (UID: \"ee751741-65c5-4db2-aa84-8c1e6868cf86\") " pod="openshift-marketplace/redhat-operators-2gh9t" Feb 24 00:12:20 crc kubenswrapper[4824]: I0224 00:12:20.282606 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee751741-65c5-4db2-aa84-8c1e6868cf86-utilities\") pod \"redhat-operators-2gh9t\" (UID: \"ee751741-65c5-4db2-aa84-8c1e6868cf86\") " pod="openshift-marketplace/redhat-operators-2gh9t" Feb 24 00:12:20 crc kubenswrapper[4824]: I0224 00:12:20.283551 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee751741-65c5-4db2-aa84-8c1e6868cf86-utilities\") pod \"redhat-operators-2gh9t\" (UID: \"ee751741-65c5-4db2-aa84-8c1e6868cf86\") " pod="openshift-marketplace/redhat-operators-2gh9t" Feb 24 00:12:20 crc kubenswrapper[4824]: I0224 00:12:20.305684 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4tlq\" (UniqueName: \"kubernetes.io/projected/ee751741-65c5-4db2-aa84-8c1e6868cf86-kube-api-access-s4tlq\") pod \"redhat-operators-2gh9t\" (UID: \"ee751741-65c5-4db2-aa84-8c1e6868cf86\") " pod="openshift-marketplace/redhat-operators-2gh9t" Feb 24 00:12:20 crc kubenswrapper[4824]: I0224 00:12:20.319970 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2gh9t" Feb 24 00:12:20 crc kubenswrapper[4824]: I0224 00:12:20.748971 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9t5cw" podStartSLOduration=2.22147007 podStartE2EDuration="4.748945513s" podCreationTimestamp="2026-02-24 00:12:16 +0000 UTC" firstStartedPulling="2026-02-24 00:12:17.166734143 +0000 UTC m=+401.156358612" lastFinishedPulling="2026-02-24 00:12:19.694209576 +0000 UTC m=+403.683834055" observedRunningTime="2026-02-24 00:12:20.284528746 +0000 UTC m=+404.274153215" watchObservedRunningTime="2026-02-24 00:12:20.748945513 +0000 UTC m=+404.738569992" Feb 24 00:12:20 crc kubenswrapper[4824]: I0224 00:12:20.754224 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2gh9t"] Feb 24 00:12:20 crc kubenswrapper[4824]: W0224 00:12:20.761954 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee751741_65c5_4db2_aa84_8c1e6868cf86.slice/crio-02b0896a182546de17adf69ee63091996ac3b3d450e051ee0549762f4472ca94 WatchSource:0}: Error finding container 02b0896a182546de17adf69ee63091996ac3b3d450e051ee0549762f4472ca94: Status 404 returned error can't find the container with id 02b0896a182546de17adf69ee63091996ac3b3d450e051ee0549762f4472ca94 Feb 24 00:12:21 crc kubenswrapper[4824]: I0224 00:12:21.233982 4824 generic.go:334] "Generic (PLEG): container finished" podID="ee751741-65c5-4db2-aa84-8c1e6868cf86" containerID="aebf29b53c0744e4fc62470b2b490adcbfe0e860d41469fd83ba66726889c35d" exitCode=0 Feb 24 00:12:21 crc kubenswrapper[4824]: I0224 00:12:21.234176 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gh9t" event={"ID":"ee751741-65c5-4db2-aa84-8c1e6868cf86","Type":"ContainerDied","Data":"aebf29b53c0744e4fc62470b2b490adcbfe0e860d41469fd83ba66726889c35d"} Feb 24 
00:12:21 crc kubenswrapper[4824]: I0224 00:12:21.234400 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gh9t" event={"ID":"ee751741-65c5-4db2-aa84-8c1e6868cf86","Type":"ContainerStarted","Data":"02b0896a182546de17adf69ee63091996ac3b3d450e051ee0549762f4472ca94"} Feb 24 00:12:21 crc kubenswrapper[4824]: I0224 00:12:21.240956 4824 generic.go:334] "Generic (PLEG): container finished" podID="a392c527-174d-4f66-a7cd-5f625192f3c7" containerID="c010092ad92facedf5f70ec1775c1db569552e65318cda2aa08c7010e7ba052b" exitCode=0 Feb 24 00:12:21 crc kubenswrapper[4824]: I0224 00:12:21.241033 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49mft" event={"ID":"a392c527-174d-4f66-a7cd-5f625192f3c7","Type":"ContainerDied","Data":"c010092ad92facedf5f70ec1775c1db569552e65318cda2aa08c7010e7ba052b"} Feb 24 00:12:21 crc kubenswrapper[4824]: I0224 00:12:21.247732 4824 generic.go:334] "Generic (PLEG): container finished" podID="2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d" containerID="436fe7ae7f38dd1b2015c03bc4e174f6d2505305fb8bdb910e42d905aca7ff72" exitCode=0 Feb 24 00:12:21 crc kubenswrapper[4824]: I0224 00:12:21.248413 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gq54" event={"ID":"2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d","Type":"ContainerDied","Data":"436fe7ae7f38dd1b2015c03bc4e174f6d2505305fb8bdb910e42d905aca7ff72"} Feb 24 00:12:22 crc kubenswrapper[4824]: I0224 00:12:22.254956 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gq54" event={"ID":"2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d","Type":"ContainerStarted","Data":"af14323064ed760129bdf058136730f8060a37bfcd1a17a32c767285d91d8ee3"} Feb 24 00:12:22 crc kubenswrapper[4824]: I0224 00:12:22.258261 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gh9t" 
event={"ID":"ee751741-65c5-4db2-aa84-8c1e6868cf86","Type":"ContainerStarted","Data":"f9b584b25f5128765a5a2edfc6aa7e541ad6845c731eef3b059f02685564264b"} Feb 24 00:12:22 crc kubenswrapper[4824]: I0224 00:12:22.260531 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49mft" event={"ID":"a392c527-174d-4f66-a7cd-5f625192f3c7","Type":"ContainerStarted","Data":"50e709672894167a7582bd8375871ba68b8ed732f6c8d86a62300ec1ec37ba43"} Feb 24 00:12:22 crc kubenswrapper[4824]: I0224 00:12:22.293305 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9gq54" podStartSLOduration=2.640629552 podStartE2EDuration="5.293285697s" podCreationTimestamp="2026-02-24 00:12:17 +0000 UTC" firstStartedPulling="2026-02-24 00:12:19.195320016 +0000 UTC m=+403.184944485" lastFinishedPulling="2026-02-24 00:12:21.847976161 +0000 UTC m=+405.837600630" observedRunningTime="2026-02-24 00:12:22.277931387 +0000 UTC m=+406.267555866" watchObservedRunningTime="2026-02-24 00:12:22.293285697 +0000 UTC m=+406.282910156" Feb 24 00:12:22 crc kubenswrapper[4824]: I0224 00:12:22.306461 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-49mft" podStartSLOduration=2.628951762 podStartE2EDuration="4.306445086s" podCreationTimestamp="2026-02-24 00:12:18 +0000 UTC" firstStartedPulling="2026-02-24 00:12:20.218662785 +0000 UTC m=+404.208287254" lastFinishedPulling="2026-02-24 00:12:21.896156109 +0000 UTC m=+405.885780578" observedRunningTime="2026-02-24 00:12:22.291628601 +0000 UTC m=+406.281253080" watchObservedRunningTime="2026-02-24 00:12:22.306445086 +0000 UTC m=+406.296069555" Feb 24 00:12:23 crc kubenswrapper[4824]: I0224 00:12:23.268720 4824 generic.go:334] "Generic (PLEG): container finished" podID="ee751741-65c5-4db2-aa84-8c1e6868cf86" containerID="f9b584b25f5128765a5a2edfc6aa7e541ad6845c731eef3b059f02685564264b" exitCode=0 Feb 24 
00:12:23 crc kubenswrapper[4824]: I0224 00:12:23.269655 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gh9t" event={"ID":"ee751741-65c5-4db2-aa84-8c1e6868cf86","Type":"ContainerDied","Data":"f9b584b25f5128765a5a2edfc6aa7e541ad6845c731eef3b059f02685564264b"} Feb 24 00:12:24 crc kubenswrapper[4824]: I0224 00:12:24.276843 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2gh9t" event={"ID":"ee751741-65c5-4db2-aa84-8c1e6868cf86","Type":"ContainerStarted","Data":"db3d9cd416e0d097d4637e15ef0a6a67e6e68a2816d7eb41bb00fca055ead6a3"} Feb 24 00:12:24 crc kubenswrapper[4824]: I0224 00:12:24.302192 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2gh9t" podStartSLOduration=2.703407038 podStartE2EDuration="5.30217187s" podCreationTimestamp="2026-02-24 00:12:19 +0000 UTC" firstStartedPulling="2026-02-24 00:12:21.238805076 +0000 UTC m=+405.228429545" lastFinishedPulling="2026-02-24 00:12:23.837569908 +0000 UTC m=+407.827194377" observedRunningTime="2026-02-24 00:12:24.29890198 +0000 UTC m=+408.288526449" watchObservedRunningTime="2026-02-24 00:12:24.30217187 +0000 UTC m=+408.291796349" Feb 24 00:12:26 crc kubenswrapper[4824]: I0224 00:12:26.533753 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:12:26 crc kubenswrapper[4824]: I0224 00:12:26.534607 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9t5cw" Feb 24 00:12:26 crc kubenswrapper[4824]: I0224 00:12:26.534984 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9t5cw" Feb 24 00:12:26 crc kubenswrapper[4824]: I0224 00:12:26.578294 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-9t5cw" Feb 24 00:12:27 crc kubenswrapper[4824]: I0224 00:12:27.342664 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9t5cw" Feb 24 00:12:27 crc kubenswrapper[4824]: I0224 00:12:27.901303 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9gq54" Feb 24 00:12:27 crc kubenswrapper[4824]: I0224 00:12:27.901788 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9gq54" Feb 24 00:12:27 crc kubenswrapper[4824]: I0224 00:12:27.949875 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9gq54" Feb 24 00:12:28 crc kubenswrapper[4824]: I0224 00:12:28.030688 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:12:28 crc kubenswrapper[4824]: I0224 00:12:28.034476 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:12:28 crc kubenswrapper[4824]: I0224 00:12:28.307030 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:12:28 crc kubenswrapper[4824]: I0224 00:12:28.343097 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9gq54" Feb 24 00:12:28 crc kubenswrapper[4824]: I0224 00:12:28.906172 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-49mft" Feb 24 00:12:28 crc kubenswrapper[4824]: I0224 00:12:28.906236 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-49mft" 
Feb 24 00:12:28 crc kubenswrapper[4824]: I0224 00:12:28.944849 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-49mft" Feb 24 00:12:29 crc kubenswrapper[4824]: I0224 00:12:29.353683 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-49mft" Feb 24 00:12:30 crc kubenswrapper[4824]: I0224 00:12:30.320437 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2gh9t" Feb 24 00:12:30 crc kubenswrapper[4824]: I0224 00:12:30.321015 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2gh9t" Feb 24 00:12:30 crc kubenswrapper[4824]: I0224 00:12:30.358246 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2gh9t" Feb 24 00:12:31 crc kubenswrapper[4824]: I0224 00:12:31.357406 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2gh9t" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.308109 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jqdmp"] Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.310543 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jqdmp" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.314206 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.314471 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.323272 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jqdmp"] Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.332613 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.369316 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mdj24"] Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.370334 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.370999 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bxg2\" (UniqueName: \"kubernetes.io/projected/1c407e9b-e49e-46a5-8920-786aad1539fb-kube-api-access-2bxg2\") pod \"marketplace-operator-79b997595-jqdmp\" (UID: \"1c407e9b-e49e-46a5-8920-786aad1539fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqdmp" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.371194 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1c407e9b-e49e-46a5-8920-786aad1539fb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jqdmp\" (UID: \"1c407e9b-e49e-46a5-8920-786aad1539fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqdmp" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.371428 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c407e9b-e49e-46a5-8920-786aad1539fb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jqdmp\" (UID: \"1c407e9b-e49e-46a5-8920-786aad1539fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqdmp" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.442382 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mdj24"] Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.472165 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/95bae32e-6c93-43ad-a262-14032654e69e-trusted-ca\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.472236 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c407e9b-e49e-46a5-8920-786aad1539fb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jqdmp\" (UID: \"1c407e9b-e49e-46a5-8920-786aad1539fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqdmp" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.472265 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/95bae32e-6c93-43ad-a262-14032654e69e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.472303 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.472328 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/95bae32e-6c93-43ad-a262-14032654e69e-registry-certificates\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.472358 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bxg2\" 
(UniqueName: \"kubernetes.io/projected/1c407e9b-e49e-46a5-8920-786aad1539fb-kube-api-access-2bxg2\") pod \"marketplace-operator-79b997595-jqdmp\" (UID: \"1c407e9b-e49e-46a5-8920-786aad1539fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqdmp" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.472386 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw7br\" (UniqueName: \"kubernetes.io/projected/95bae32e-6c93-43ad-a262-14032654e69e-kube-api-access-lw7br\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.472429 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/95bae32e-6c93-43ad-a262-14032654e69e-bound-sa-token\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.472456 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/95bae32e-6c93-43ad-a262-14032654e69e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.472496 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1c407e9b-e49e-46a5-8920-786aad1539fb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jqdmp\" (UID: \"1c407e9b-e49e-46a5-8920-786aad1539fb\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-jqdmp" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.472618 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/95bae32e-6c93-43ad-a262-14032654e69e-registry-tls\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.474122 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c407e9b-e49e-46a5-8920-786aad1539fb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jqdmp\" (UID: \"1c407e9b-e49e-46a5-8920-786aad1539fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqdmp" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.497859 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1c407e9b-e49e-46a5-8920-786aad1539fb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jqdmp\" (UID: \"1c407e9b-e49e-46a5-8920-786aad1539fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqdmp" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.552558 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.556515 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bxg2\" (UniqueName: 
\"kubernetes.io/projected/1c407e9b-e49e-46a5-8920-786aad1539fb-kube-api-access-2bxg2\") pod \"marketplace-operator-79b997595-jqdmp\" (UID: \"1c407e9b-e49e-46a5-8920-786aad1539fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqdmp" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.577020 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/95bae32e-6c93-43ad-a262-14032654e69e-registry-tls\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.577064 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/95bae32e-6c93-43ad-a262-14032654e69e-trusted-ca\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.577086 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/95bae32e-6c93-43ad-a262-14032654e69e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.577119 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/95bae32e-6c93-43ad-a262-14032654e69e-registry-certificates\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.577143 4824 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lw7br\" (UniqueName: \"kubernetes.io/projected/95bae32e-6c93-43ad-a262-14032654e69e-kube-api-access-lw7br\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.577174 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/95bae32e-6c93-43ad-a262-14032654e69e-bound-sa-token\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.577191 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/95bae32e-6c93-43ad-a262-14032654e69e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.577962 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/95bae32e-6c93-43ad-a262-14032654e69e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.579504 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/95bae32e-6c93-43ad-a262-14032654e69e-registry-certificates\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 
24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.579680 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/95bae32e-6c93-43ad-a262-14032654e69e-trusted-ca\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.583567 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/95bae32e-6c93-43ad-a262-14032654e69e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.594491 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/95bae32e-6c93-43ad-a262-14032654e69e-registry-tls\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.615493 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/95bae32e-6c93-43ad-a262-14032654e69e-bound-sa-token\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.618964 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw7br\" (UniqueName: \"kubernetes.io/projected/95bae32e-6c93-43ad-a262-14032654e69e-kube-api-access-lw7br\") pod \"image-registry-66df7c8f76-mdj24\" (UID: \"95bae32e-6c93-43ad-a262-14032654e69e\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.629202 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jqdmp" Feb 24 00:12:39 crc kubenswrapper[4824]: I0224 00:12:39.685195 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:40 crc kubenswrapper[4824]: I0224 00:12:40.082714 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jqdmp"] Feb 24 00:12:40 crc kubenswrapper[4824]: I0224 00:12:40.144271 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mdj24"] Feb 24 00:12:40 crc kubenswrapper[4824]: W0224 00:12:40.149647 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95bae32e_6c93_43ad_a262_14032654e69e.slice/crio-ce2790e4ef255bf5d31045dcdfab8c1196c0c965ebb3b510526417d1480eca89 WatchSource:0}: Error finding container ce2790e4ef255bf5d31045dcdfab8c1196c0c965ebb3b510526417d1480eca89: Status 404 returned error can't find the container with id ce2790e4ef255bf5d31045dcdfab8c1196c0c965ebb3b510526417d1480eca89 Feb 24 00:12:40 crc kubenswrapper[4824]: I0224 00:12:40.362479 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jqdmp" event={"ID":"1c407e9b-e49e-46a5-8920-786aad1539fb","Type":"ContainerStarted","Data":"314dcd6c9f107655302da7241ac56ace2263b4bc7ed0de5a2b8a32c290fd9e2f"} Feb 24 00:12:40 crc kubenswrapper[4824]: I0224 00:12:40.362578 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jqdmp" 
event={"ID":"1c407e9b-e49e-46a5-8920-786aad1539fb","Type":"ContainerStarted","Data":"b74e58155541accb98ac9d70450b2a5cd4993f3029590721d053dbbd82db2a6d"} Feb 24 00:12:40 crc kubenswrapper[4824]: I0224 00:12:40.366355 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" event={"ID":"95bae32e-6c93-43ad-a262-14032654e69e","Type":"ContainerStarted","Data":"0d00490f13978efe897a977f84d2f0a897e06b25125ef927ff90ac6ddb0fa1dc"} Feb 24 00:12:40 crc kubenswrapper[4824]: I0224 00:12:40.366679 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:40 crc kubenswrapper[4824]: I0224 00:12:40.366793 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" event={"ID":"95bae32e-6c93-43ad-a262-14032654e69e","Type":"ContainerStarted","Data":"ce2790e4ef255bf5d31045dcdfab8c1196c0c965ebb3b510526417d1480eca89"} Feb 24 00:12:40 crc kubenswrapper[4824]: I0224 00:12:40.386466 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-jqdmp" podStartSLOduration=1.386433242 podStartE2EDuration="1.386433242s" podCreationTimestamp="2026-02-24 00:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:12:40.380704165 +0000 UTC m=+424.370328654" watchObservedRunningTime="2026-02-24 00:12:40.386433242 +0000 UTC m=+424.376057711" Feb 24 00:12:40 crc kubenswrapper[4824]: I0224 00:12:40.410999 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" podStartSLOduration=1.410972053 podStartE2EDuration="1.410972053s" podCreationTimestamp="2026-02-24 00:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:12:40.408000681 +0000 UTC m=+424.397625170" watchObservedRunningTime="2026-02-24 00:12:40.410972053 +0000 UTC m=+424.400596522" Feb 24 00:12:41 crc kubenswrapper[4824]: I0224 00:12:41.372348 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jqdmp" Feb 24 00:12:41 crc kubenswrapper[4824]: I0224 00:12:41.375190 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jqdmp" Feb 24 00:12:53 crc kubenswrapper[4824]: I0224 00:12:53.276266 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:12:53 crc kubenswrapper[4824]: I0224 00:12:53.276980 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:12:59 crc kubenswrapper[4824]: I0224 00:12:59.692025 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-mdj24" Feb 24 00:12:59 crc kubenswrapper[4824]: I0224 00:12:59.822170 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ccm27"] Feb 24 00:13:13 crc kubenswrapper[4824]: I0224 00:13:13.810739 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:13:13 crc kubenswrapper[4824]: I0224 00:13:13.811307 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:13:13 crc kubenswrapper[4824]: I0224 00:13:13.812730 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:13:13 crc kubenswrapper[4824]: I0224 00:13:13.817326 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:13:13 crc kubenswrapper[4824]: I0224 00:13:13.894941 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:13:14 crc kubenswrapper[4824]: I0224 00:13:14.576049 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ddd5396cef94d398a8f1a79fe5f2cb1b3279a90c8560d5f7dd6294be0bb10b17"} Feb 24 00:13:14 crc kubenswrapper[4824]: I0224 00:13:14.576746 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"90738039084f55791a3d63e013cc7682f6c33620cbd2e333e8f898f890afb896"} Feb 24 00:13:14 crc kubenswrapper[4824]: I0224 00:13:14.826080 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:13:14 crc kubenswrapper[4824]: I0224 00:13:14.826181 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:13:14 crc kubenswrapper[4824]: I0224 00:13:14.831484 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:13:14 crc kubenswrapper[4824]: I0224 00:13:14.834787 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:13:14 crc kubenswrapper[4824]: I0224 00:13:14.995373 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:13:15 crc kubenswrapper[4824]: I0224 00:13:15.102844 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:13:15 crc kubenswrapper[4824]: W0224 00:13:15.204271 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-c0f2a4cae34d3f187f11caccad67cdd7133fe47d97e8687a9d7e96747e39a060 WatchSource:0}: Error finding container c0f2a4cae34d3f187f11caccad67cdd7133fe47d97e8687a9d7e96747e39a060: Status 404 returned error can't find the container with id c0f2a4cae34d3f187f11caccad67cdd7133fe47d97e8687a9d7e96747e39a060 Feb 24 00:13:15 crc kubenswrapper[4824]: W0224 00:13:15.302576 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-e38be0f7ea2ce587e69b12592191eaf560277882839ac1ba66f2bc05f2f5b85d WatchSource:0}: Error finding container e38be0f7ea2ce587e69b12592191eaf560277882839ac1ba66f2bc05f2f5b85d: Status 404 returned error can't find the container with id 
e38be0f7ea2ce587e69b12592191eaf560277882839ac1ba66f2bc05f2f5b85d Feb 24 00:13:15 crc kubenswrapper[4824]: I0224 00:13:15.583819 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"152e6c347c5790dfebc84dd07ddc2793cc704355e1a548c5fda9185e071c390e"} Feb 24 00:13:15 crc kubenswrapper[4824]: I0224 00:13:15.583879 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c0f2a4cae34d3f187f11caccad67cdd7133fe47d97e8687a9d7e96747e39a060"} Feb 24 00:13:16 crc kubenswrapper[4824]: I0224 00:13:15.585858 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"69b779f2ca4324453996885939e7be3c2c022c7db2214f41b97d3b80f5afe0c5"} Feb 24 00:13:16 crc kubenswrapper[4824]: I0224 00:13:15.585913 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e38be0f7ea2ce587e69b12592191eaf560277882839ac1ba66f2bc05f2f5b85d"} Feb 24 00:13:16 crc kubenswrapper[4824]: I0224 00:13:15.586614 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:13:23 crc kubenswrapper[4824]: I0224 00:13:23.277081 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:13:23 crc 
kubenswrapper[4824]: I0224 00:13:23.277715 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:13:24 crc kubenswrapper[4824]: I0224 00:13:24.870466 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" podUID="9016587d-3cd5-46d7-bd50-586cd32933f7" containerName="registry" containerID="cri-o://2298440fb6247a85651f7297a9c931c5be1fbe577d08f83bab4ec4ae10e9e795" gracePeriod=30 Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.242984 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.391882 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9016587d-3cd5-46d7-bd50-586cd32933f7-installation-pull-secrets\") pod \"9016587d-3cd5-46d7-bd50-586cd32933f7\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.391975 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9016587d-3cd5-46d7-bd50-586cd32933f7-bound-sa-token\") pod \"9016587d-3cd5-46d7-bd50-586cd32933f7\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.392016 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9016587d-3cd5-46d7-bd50-586cd32933f7-ca-trust-extracted\") pod \"9016587d-3cd5-46d7-bd50-586cd32933f7\" (UID: 
\"9016587d-3cd5-46d7-bd50-586cd32933f7\") " Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.392307 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"9016587d-3cd5-46d7-bd50-586cd32933f7\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.392374 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdkzl\" (UniqueName: \"kubernetes.io/projected/9016587d-3cd5-46d7-bd50-586cd32933f7-kube-api-access-cdkzl\") pod \"9016587d-3cd5-46d7-bd50-586cd32933f7\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.392401 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9016587d-3cd5-46d7-bd50-586cd32933f7-trusted-ca\") pod \"9016587d-3cd5-46d7-bd50-586cd32933f7\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.392511 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9016587d-3cd5-46d7-bd50-586cd32933f7-registry-certificates\") pod \"9016587d-3cd5-46d7-bd50-586cd32933f7\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.392567 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9016587d-3cd5-46d7-bd50-586cd32933f7-registry-tls\") pod \"9016587d-3cd5-46d7-bd50-586cd32933f7\" (UID: \"9016587d-3cd5-46d7-bd50-586cd32933f7\") " Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.394192 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9016587d-3cd5-46d7-bd50-586cd32933f7-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9016587d-3cd5-46d7-bd50-586cd32933f7" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.394209 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9016587d-3cd5-46d7-bd50-586cd32933f7-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9016587d-3cd5-46d7-bd50-586cd32933f7" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.400134 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9016587d-3cd5-46d7-bd50-586cd32933f7-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9016587d-3cd5-46d7-bd50-586cd32933f7" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.401384 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9016587d-3cd5-46d7-bd50-586cd32933f7-kube-api-access-cdkzl" (OuterVolumeSpecName: "kube-api-access-cdkzl") pod "9016587d-3cd5-46d7-bd50-586cd32933f7" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7"). InnerVolumeSpecName "kube-api-access-cdkzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.401864 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9016587d-3cd5-46d7-bd50-586cd32933f7-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9016587d-3cd5-46d7-bd50-586cd32933f7" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.401963 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9016587d-3cd5-46d7-bd50-586cd32933f7-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9016587d-3cd5-46d7-bd50-586cd32933f7" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.405570 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "9016587d-3cd5-46d7-bd50-586cd32933f7" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.409809 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9016587d-3cd5-46d7-bd50-586cd32933f7-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9016587d-3cd5-46d7-bd50-586cd32933f7" (UID: "9016587d-3cd5-46d7-bd50-586cd32933f7"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.494097 4824 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9016587d-3cd5-46d7-bd50-586cd32933f7-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.494894 4824 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9016587d-3cd5-46d7-bd50-586cd32933f7-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.494931 4824 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9016587d-3cd5-46d7-bd50-586cd32933f7-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.494945 4824 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9016587d-3cd5-46d7-bd50-586cd32933f7-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.494959 4824 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9016587d-3cd5-46d7-bd50-586cd32933f7-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.494972 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdkzl\" (UniqueName: \"kubernetes.io/projected/9016587d-3cd5-46d7-bd50-586cd32933f7-kube-api-access-cdkzl\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.494983 4824 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9016587d-3cd5-46d7-bd50-586cd32933f7-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:13:25 crc 
kubenswrapper[4824]: I0224 00:13:25.652937 4824 generic.go:334] "Generic (PLEG): container finished" podID="9016587d-3cd5-46d7-bd50-586cd32933f7" containerID="2298440fb6247a85651f7297a9c931c5be1fbe577d08f83bab4ec4ae10e9e795" exitCode=0 Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.652999 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" event={"ID":"9016587d-3cd5-46d7-bd50-586cd32933f7","Type":"ContainerDied","Data":"2298440fb6247a85651f7297a9c931c5be1fbe577d08f83bab4ec4ae10e9e795"} Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.653041 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" event={"ID":"9016587d-3cd5-46d7-bd50-586cd32933f7","Type":"ContainerDied","Data":"7258e3c460d9eb30e7b444c92e1cb2427c103a3e9b4014b73c4a4fe6cecde128"} Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.653072 4824 scope.go:117] "RemoveContainer" containerID="2298440fb6247a85651f7297a9c931c5be1fbe577d08f83bab4ec4ae10e9e795" Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.653259 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ccm27" Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.700328 4824 scope.go:117] "RemoveContainer" containerID="2298440fb6247a85651f7297a9c931c5be1fbe577d08f83bab4ec4ae10e9e795" Feb 24 00:13:25 crc kubenswrapper[4824]: E0224 00:13:25.703730 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2298440fb6247a85651f7297a9c931c5be1fbe577d08f83bab4ec4ae10e9e795\": container with ID starting with 2298440fb6247a85651f7297a9c931c5be1fbe577d08f83bab4ec4ae10e9e795 not found: ID does not exist" containerID="2298440fb6247a85651f7297a9c931c5be1fbe577d08f83bab4ec4ae10e9e795" Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.703857 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2298440fb6247a85651f7297a9c931c5be1fbe577d08f83bab4ec4ae10e9e795"} err="failed to get container status \"2298440fb6247a85651f7297a9c931c5be1fbe577d08f83bab4ec4ae10e9e795\": rpc error: code = NotFound desc = could not find container \"2298440fb6247a85651f7297a9c931c5be1fbe577d08f83bab4ec4ae10e9e795\": container with ID starting with 2298440fb6247a85651f7297a9c931c5be1fbe577d08f83bab4ec4ae10e9e795 not found: ID does not exist" Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.736106 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ccm27"] Feb 24 00:13:25 crc kubenswrapper[4824]: I0224 00:13:25.741197 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ccm27"] Feb 24 00:13:26 crc kubenswrapper[4824]: I0224 00:13:26.701752 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9016587d-3cd5-46d7-bd50-586cd32933f7" path="/var/lib/kubelet/pods/9016587d-3cd5-46d7-bd50-586cd32933f7/volumes" Feb 24 00:13:45 crc kubenswrapper[4824]: I0224 
00:13:45.109372 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:13:53 crc kubenswrapper[4824]: I0224 00:13:53.275961 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:13:53 crc kubenswrapper[4824]: I0224 00:13:53.276544 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:13:53 crc kubenswrapper[4824]: I0224 00:13:53.276637 4824 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" Feb 24 00:13:53 crc kubenswrapper[4824]: I0224 00:13:53.277707 4824 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ec5f29f7aaf13391c2278f1eb972e5c2f9ed40d998b7f6d08d6d97e54173df94"} pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 00:13:53 crc kubenswrapper[4824]: I0224 00:13:53.277837 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" containerID="cri-o://ec5f29f7aaf13391c2278f1eb972e5c2f9ed40d998b7f6d08d6d97e54173df94" gracePeriod=600 Feb 24 00:13:53 crc kubenswrapper[4824]: I0224 00:13:53.846450 
4824 generic.go:334] "Generic (PLEG): container finished" podID="939ca085-9383-42e6-b7d6-37f101137273" containerID="ec5f29f7aaf13391c2278f1eb972e5c2f9ed40d998b7f6d08d6d97e54173df94" exitCode=0 Feb 24 00:13:53 crc kubenswrapper[4824]: I0224 00:13:53.846540 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" event={"ID":"939ca085-9383-42e6-b7d6-37f101137273","Type":"ContainerDied","Data":"ec5f29f7aaf13391c2278f1eb972e5c2f9ed40d998b7f6d08d6d97e54173df94"} Feb 24 00:13:53 crc kubenswrapper[4824]: I0224 00:13:53.846843 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" event={"ID":"939ca085-9383-42e6-b7d6-37f101137273","Type":"ContainerStarted","Data":"14f28b64a526a9334cfaacd13a3a23756d3ea46670a60bcfe695a7e80551056e"} Feb 24 00:13:53 crc kubenswrapper[4824]: I0224 00:13:53.846868 4824 scope.go:117] "RemoveContainer" containerID="13c56d6a66d6912c9ae019eb515bfeb043d9ce26a200da5330581d07cd849ba3" Feb 24 00:15:00 crc kubenswrapper[4824]: I0224 00:15:00.171879 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531535-6rfpn"] Feb 24 00:15:00 crc kubenswrapper[4824]: E0224 00:15:00.172855 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9016587d-3cd5-46d7-bd50-586cd32933f7" containerName="registry" Feb 24 00:15:00 crc kubenswrapper[4824]: I0224 00:15:00.172877 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="9016587d-3cd5-46d7-bd50-586cd32933f7" containerName="registry" Feb 24 00:15:00 crc kubenswrapper[4824]: I0224 00:15:00.173024 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="9016587d-3cd5-46d7-bd50-586cd32933f7" containerName="registry" Feb 24 00:15:00 crc kubenswrapper[4824]: I0224 00:15:00.173683 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-6rfpn" Feb 24 00:15:00 crc kubenswrapper[4824]: I0224 00:15:00.177777 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 24 00:15:00 crc kubenswrapper[4824]: I0224 00:15:00.181508 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 00:15:00 crc kubenswrapper[4824]: I0224 00:15:00.182840 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531535-6rfpn"] Feb 24 00:15:00 crc kubenswrapper[4824]: I0224 00:15:00.223434 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98a4dbc5-a115-4a3a-a5cb-36a037813cc0-secret-volume\") pod \"collect-profiles-29531535-6rfpn\" (UID: \"98a4dbc5-a115-4a3a-a5cb-36a037813cc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-6rfpn" Feb 24 00:15:00 crc kubenswrapper[4824]: I0224 00:15:00.223581 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6qjz\" (UniqueName: \"kubernetes.io/projected/98a4dbc5-a115-4a3a-a5cb-36a037813cc0-kube-api-access-t6qjz\") pod \"collect-profiles-29531535-6rfpn\" (UID: \"98a4dbc5-a115-4a3a-a5cb-36a037813cc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-6rfpn" Feb 24 00:15:00 crc kubenswrapper[4824]: I0224 00:15:00.223682 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98a4dbc5-a115-4a3a-a5cb-36a037813cc0-config-volume\") pod \"collect-profiles-29531535-6rfpn\" (UID: \"98a4dbc5-a115-4a3a-a5cb-36a037813cc0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-6rfpn" Feb 24 00:15:00 crc kubenswrapper[4824]: I0224 00:15:00.325013 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6qjz\" (UniqueName: \"kubernetes.io/projected/98a4dbc5-a115-4a3a-a5cb-36a037813cc0-kube-api-access-t6qjz\") pod \"collect-profiles-29531535-6rfpn\" (UID: \"98a4dbc5-a115-4a3a-a5cb-36a037813cc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-6rfpn" Feb 24 00:15:00 crc kubenswrapper[4824]: I0224 00:15:00.325103 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98a4dbc5-a115-4a3a-a5cb-36a037813cc0-config-volume\") pod \"collect-profiles-29531535-6rfpn\" (UID: \"98a4dbc5-a115-4a3a-a5cb-36a037813cc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-6rfpn" Feb 24 00:15:00 crc kubenswrapper[4824]: I0224 00:15:00.325137 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98a4dbc5-a115-4a3a-a5cb-36a037813cc0-secret-volume\") pod \"collect-profiles-29531535-6rfpn\" (UID: \"98a4dbc5-a115-4a3a-a5cb-36a037813cc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-6rfpn" Feb 24 00:15:00 crc kubenswrapper[4824]: I0224 00:15:00.326855 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98a4dbc5-a115-4a3a-a5cb-36a037813cc0-config-volume\") pod \"collect-profiles-29531535-6rfpn\" (UID: \"98a4dbc5-a115-4a3a-a5cb-36a037813cc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-6rfpn" Feb 24 00:15:00 crc kubenswrapper[4824]: I0224 00:15:00.331736 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/98a4dbc5-a115-4a3a-a5cb-36a037813cc0-secret-volume\") pod \"collect-profiles-29531535-6rfpn\" (UID: \"98a4dbc5-a115-4a3a-a5cb-36a037813cc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-6rfpn" Feb 24 00:15:00 crc kubenswrapper[4824]: I0224 00:15:00.340961 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6qjz\" (UniqueName: \"kubernetes.io/projected/98a4dbc5-a115-4a3a-a5cb-36a037813cc0-kube-api-access-t6qjz\") pod \"collect-profiles-29531535-6rfpn\" (UID: \"98a4dbc5-a115-4a3a-a5cb-36a037813cc0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-6rfpn" Feb 24 00:15:00 crc kubenswrapper[4824]: I0224 00:15:00.503948 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-6rfpn" Feb 24 00:15:00 crc kubenswrapper[4824]: I0224 00:15:00.712224 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531535-6rfpn"] Feb 24 00:15:01 crc kubenswrapper[4824]: I0224 00:15:01.295845 4824 generic.go:334] "Generic (PLEG): container finished" podID="98a4dbc5-a115-4a3a-a5cb-36a037813cc0" containerID="93001c421cb7488ac129c49bbd0067fad898c23e83f878d2ef1ccb98ffc04df3" exitCode=0 Feb 24 00:15:01 crc kubenswrapper[4824]: I0224 00:15:01.295895 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-6rfpn" event={"ID":"98a4dbc5-a115-4a3a-a5cb-36a037813cc0","Type":"ContainerDied","Data":"93001c421cb7488ac129c49bbd0067fad898c23e83f878d2ef1ccb98ffc04df3"} Feb 24 00:15:01 crc kubenswrapper[4824]: I0224 00:15:01.295955 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-6rfpn" 
event={"ID":"98a4dbc5-a115-4a3a-a5cb-36a037813cc0","Type":"ContainerStarted","Data":"e1a569931aa90fecd5361a82afa72a79e893d766a73d1cfde3102edb0ac7ae3a"} Feb 24 00:15:02 crc kubenswrapper[4824]: I0224 00:15:02.567443 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-6rfpn" Feb 24 00:15:02 crc kubenswrapper[4824]: I0224 00:15:02.754063 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98a4dbc5-a115-4a3a-a5cb-36a037813cc0-secret-volume\") pod \"98a4dbc5-a115-4a3a-a5cb-36a037813cc0\" (UID: \"98a4dbc5-a115-4a3a-a5cb-36a037813cc0\") " Feb 24 00:15:02 crc kubenswrapper[4824]: I0224 00:15:02.754570 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6qjz\" (UniqueName: \"kubernetes.io/projected/98a4dbc5-a115-4a3a-a5cb-36a037813cc0-kube-api-access-t6qjz\") pod \"98a4dbc5-a115-4a3a-a5cb-36a037813cc0\" (UID: \"98a4dbc5-a115-4a3a-a5cb-36a037813cc0\") " Feb 24 00:15:02 crc kubenswrapper[4824]: I0224 00:15:02.754750 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98a4dbc5-a115-4a3a-a5cb-36a037813cc0-config-volume\") pod \"98a4dbc5-a115-4a3a-a5cb-36a037813cc0\" (UID: \"98a4dbc5-a115-4a3a-a5cb-36a037813cc0\") " Feb 24 00:15:02 crc kubenswrapper[4824]: I0224 00:15:02.755412 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98a4dbc5-a115-4a3a-a5cb-36a037813cc0-config-volume" (OuterVolumeSpecName: "config-volume") pod "98a4dbc5-a115-4a3a-a5cb-36a037813cc0" (UID: "98a4dbc5-a115-4a3a-a5cb-36a037813cc0"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:15:02 crc kubenswrapper[4824]: I0224 00:15:02.759671 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98a4dbc5-a115-4a3a-a5cb-36a037813cc0-kube-api-access-t6qjz" (OuterVolumeSpecName: "kube-api-access-t6qjz") pod "98a4dbc5-a115-4a3a-a5cb-36a037813cc0" (UID: "98a4dbc5-a115-4a3a-a5cb-36a037813cc0"). InnerVolumeSpecName "kube-api-access-t6qjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:15:02 crc kubenswrapper[4824]: I0224 00:15:02.760687 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98a4dbc5-a115-4a3a-a5cb-36a037813cc0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "98a4dbc5-a115-4a3a-a5cb-36a037813cc0" (UID: "98a4dbc5-a115-4a3a-a5cb-36a037813cc0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:15:02 crc kubenswrapper[4824]: I0224 00:15:02.856056 4824 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98a4dbc5-a115-4a3a-a5cb-36a037813cc0-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 00:15:02 crc kubenswrapper[4824]: I0224 00:15:02.856093 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6qjz\" (UniqueName: \"kubernetes.io/projected/98a4dbc5-a115-4a3a-a5cb-36a037813cc0-kube-api-access-t6qjz\") on node \"crc\" DevicePath \"\"" Feb 24 00:15:02 crc kubenswrapper[4824]: I0224 00:15:02.856103 4824 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98a4dbc5-a115-4a3a-a5cb-36a037813cc0-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 00:15:03 crc kubenswrapper[4824]: I0224 00:15:03.311661 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-6rfpn" 
event={"ID":"98a4dbc5-a115-4a3a-a5cb-36a037813cc0","Type":"ContainerDied","Data":"e1a569931aa90fecd5361a82afa72a79e893d766a73d1cfde3102edb0ac7ae3a"} Feb 24 00:15:03 crc kubenswrapper[4824]: I0224 00:15:03.311730 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-6rfpn" Feb 24 00:15:03 crc kubenswrapper[4824]: I0224 00:15:03.311745 4824 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1a569931aa90fecd5361a82afa72a79e893d766a73d1cfde3102edb0ac7ae3a" Feb 24 00:15:53 crc kubenswrapper[4824]: I0224 00:15:53.276027 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:15:53 crc kubenswrapper[4824]: I0224 00:15:53.278010 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:16:23 crc kubenswrapper[4824]: I0224 00:16:23.276833 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:16:23 crc kubenswrapper[4824]: I0224 00:16:23.277658 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.066636 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4xjg6"] Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.067990 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovn-controller" containerID="cri-o://0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91" gracePeriod=30 Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.068049 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="nbdb" containerID="cri-o://4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7" gracePeriod=30 Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.068214 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="northd" containerID="cri-o://f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04" gracePeriod=30 Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.068247 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="sbdb" containerID="cri-o://05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430" gracePeriod=30 Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.068330 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="kube-rbac-proxy-ovn-metrics" 
containerID="cri-o://b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2" gracePeriod=30 Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.068389 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovn-acl-logging" containerID="cri-o://8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44" gracePeriod=30 Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.068454 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="kube-rbac-proxy-node" containerID="cri-o://869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8" gracePeriod=30 Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.128470 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovnkube-controller" containerID="cri-o://a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee" gracePeriod=30 Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.427211 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4xjg6_d985b875-dd5e-4767-a4e2-209894575a8f/ovnkube-controller/3.log" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.429491 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4xjg6_d985b875-dd5e-4767-a4e2-209894575a8f/ovn-acl-logging/0.log" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.430050 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4xjg6_d985b875-dd5e-4767-a4e2-209894575a8f/ovn-controller/0.log" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.430862 4824 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.489646 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t4spw"] Feb 24 00:16:28 crc kubenswrapper[4824]: E0224 00:16:28.489907 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovnkube-controller" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.489924 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovnkube-controller" Feb 24 00:16:28 crc kubenswrapper[4824]: E0224 00:16:28.489936 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="nbdb" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.489944 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="nbdb" Feb 24 00:16:28 crc kubenswrapper[4824]: E0224 00:16:28.489959 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a4dbc5-a115-4a3a-a5cb-36a037813cc0" containerName="collect-profiles" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.489969 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a4dbc5-a115-4a3a-a5cb-36a037813cc0" containerName="collect-profiles" Feb 24 00:16:28 crc kubenswrapper[4824]: E0224 00:16:28.489981 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovn-acl-logging" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.489990 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovn-acl-logging" Feb 24 00:16:28 crc kubenswrapper[4824]: E0224 00:16:28.490003 4824 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovnkube-controller" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490010 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovnkube-controller" Feb 24 00:16:28 crc kubenswrapper[4824]: E0224 00:16:28.490023 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovnkube-controller" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490031 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovnkube-controller" Feb 24 00:16:28 crc kubenswrapper[4824]: E0224 00:16:28.490044 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="kube-rbac-proxy-node" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490052 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="kube-rbac-proxy-node" Feb 24 00:16:28 crc kubenswrapper[4824]: E0224 00:16:28.490063 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovn-controller" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490071 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovn-controller" Feb 24 00:16:28 crc kubenswrapper[4824]: E0224 00:16:28.490081 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490089 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 00:16:28 crc kubenswrapper[4824]: E0224 00:16:28.490104 4824 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="northd" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490112 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="northd" Feb 24 00:16:28 crc kubenswrapper[4824]: E0224 00:16:28.490123 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="sbdb" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490131 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="sbdb" Feb 24 00:16:28 crc kubenswrapper[4824]: E0224 00:16:28.490143 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="kubecfg-setup" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490151 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="kubecfg-setup" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490258 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovnkube-controller" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490269 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovn-controller" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490283 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="sbdb" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490294 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a4dbc5-a115-4a3a-a5cb-36a037813cc0" containerName="collect-profiles" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490306 4824 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovnkube-controller" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490314 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="kube-rbac-proxy-node" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490327 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="nbdb" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490340 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovnkube-controller" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490352 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490363 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovnkube-controller" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490373 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="northd" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490386 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovn-acl-logging" Feb 24 00:16:28 crc kubenswrapper[4824]: E0224 00:16:28.490489 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovnkube-controller" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490499 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovnkube-controller" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490655 4824 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovnkube-controller" Feb 24 00:16:28 crc kubenswrapper[4824]: E0224 00:16:28.490758 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovnkube-controller" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.490769 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" containerName="ovnkube-controller" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.492711 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550272 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-systemd-units\") pod \"d985b875-dd5e-4767-a4e2-209894575a8f\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550335 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d985b875-dd5e-4767-a4e2-209894575a8f-ovnkube-script-lib\") pod \"d985b875-dd5e-4767-a4e2-209894575a8f\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550371 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6rnj\" (UniqueName: \"kubernetes.io/projected/d985b875-dd5e-4767-a4e2-209894575a8f-kube-api-access-x6rnj\") pod \"d985b875-dd5e-4767-a4e2-209894575a8f\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550392 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-var-lib-openvswitch\") pod \"d985b875-dd5e-4767-a4e2-209894575a8f\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550425 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-cni-bin\") pod \"d985b875-dd5e-4767-a4e2-209894575a8f\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550451 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-run-openvswitch\") pod \"d985b875-dd5e-4767-a4e2-209894575a8f\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550477 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-run-ovn\") pod \"d985b875-dd5e-4767-a4e2-209894575a8f\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550536 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-log-socket\") pod \"d985b875-dd5e-4767-a4e2-209894575a8f\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550612 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d985b875-dd5e-4767-a4e2-209894575a8f-ovn-node-metrics-cert\") pod \"d985b875-dd5e-4767-a4e2-209894575a8f\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 
00:16:28.550655 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-cni-netd\") pod \"d985b875-dd5e-4767-a4e2-209894575a8f\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550464 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d985b875-dd5e-4767-a4e2-209894575a8f" (UID: "d985b875-dd5e-4767-a4e2-209894575a8f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550508 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d985b875-dd5e-4767-a4e2-209894575a8f" (UID: "d985b875-dd5e-4767-a4e2-209894575a8f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550564 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d985b875-dd5e-4767-a4e2-209894575a8f" (UID: "d985b875-dd5e-4767-a4e2-209894575a8f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550600 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d985b875-dd5e-4767-a4e2-209894575a8f" (UID: "d985b875-dd5e-4767-a4e2-209894575a8f"). 
InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550656 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d985b875-dd5e-4767-a4e2-209894575a8f" (UID: "d985b875-dd5e-4767-a4e2-209894575a8f"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550686 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-log-socket" (OuterVolumeSpecName: "log-socket") pod "d985b875-dd5e-4767-a4e2-209894575a8f" (UID: "d985b875-dd5e-4767-a4e2-209894575a8f"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550693 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-run-systemd\") pod \"d985b875-dd5e-4767-a4e2-209894575a8f\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550813 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d985b875-dd5e-4767-a4e2-209894575a8f\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550856 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d985b875-dd5e-4767-a4e2-209894575a8f-env-overrides\") pod 
\"d985b875-dd5e-4767-a4e2-209894575a8f\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550897 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-node-log\") pod \"d985b875-dd5e-4767-a4e2-209894575a8f\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550922 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-etc-openvswitch\") pod \"d985b875-dd5e-4767-a4e2-209894575a8f\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550956 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-kubelet\") pod \"d985b875-dd5e-4767-a4e2-209894575a8f\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550997 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-run-netns\") pod \"d985b875-dd5e-4767-a4e2-209894575a8f\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551020 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-run-ovn-kubernetes\") pod \"d985b875-dd5e-4767-a4e2-209894575a8f\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551051 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d985b875-dd5e-4767-a4e2-209894575a8f-ovnkube-config\") pod \"d985b875-dd5e-4767-a4e2-209894575a8f\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551076 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-slash\") pod \"d985b875-dd5e-4767-a4e2-209894575a8f\" (UID: \"d985b875-dd5e-4767-a4e2-209894575a8f\") " Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550853 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d985b875-dd5e-4767-a4e2-209894575a8f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d985b875-dd5e-4767-a4e2-209894575a8f" (UID: "d985b875-dd5e-4767-a4e2-209894575a8f"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.550879 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d985b875-dd5e-4767-a4e2-209894575a8f" (UID: "d985b875-dd5e-4767-a4e2-209894575a8f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551079 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d985b875-dd5e-4767-a4e2-209894575a8f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d985b875-dd5e-4767-a4e2-209894575a8f" (UID: "d985b875-dd5e-4767-a4e2-209894575a8f"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551103 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d985b875-dd5e-4767-a4e2-209894575a8f" (UID: "d985b875-dd5e-4767-a4e2-209894575a8f"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551152 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-node-log" (OuterVolumeSpecName: "node-log") pod "d985b875-dd5e-4767-a4e2-209894575a8f" (UID: "d985b875-dd5e-4767-a4e2-209894575a8f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551169 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d985b875-dd5e-4767-a4e2-209894575a8f" (UID: "d985b875-dd5e-4767-a4e2-209894575a8f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551188 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d985b875-dd5e-4767-a4e2-209894575a8f" (UID: "d985b875-dd5e-4767-a4e2-209894575a8f"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551211 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d985b875-dd5e-4767-a4e2-209894575a8f" (UID: "d985b875-dd5e-4767-a4e2-209894575a8f"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551291 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-host-kubelet\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551323 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551352 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-run-systemd\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551373 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/87aca778-6541-4f0e-a507-ead5a3fda02b-ovnkube-script-lib\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551399 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-host-cni-netd\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551458 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d985b875-dd5e-4767-a4e2-209894575a8f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d985b875-dd5e-4767-a4e2-209894575a8f" (UID: "d985b875-dd5e-4767-a4e2-209894575a8f"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551460 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87aca778-6541-4f0e-a507-ead5a3fda02b-ovn-node-metrics-cert\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551499 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-host-run-ovn-kubernetes\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551583 4824 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-slash" (OuterVolumeSpecName: "host-slash") pod "d985b875-dd5e-4767-a4e2-209894575a8f" (UID: "d985b875-dd5e-4767-a4e2-209894575a8f"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551612 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87aca778-6541-4f0e-a507-ead5a3fda02b-env-overrides\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551638 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d985b875-dd5e-4767-a4e2-209894575a8f" (UID: "d985b875-dd5e-4767-a4e2-209894575a8f"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551659 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-etc-openvswitch\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551719 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhggx\" (UniqueName: \"kubernetes.io/projected/87aca778-6541-4f0e-a507-ead5a3fda02b-kube-api-access-xhggx\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551777 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-systemd-units\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551808 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-host-slash\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551842 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-var-lib-openvswitch\") pod \"ovnkube-node-t4spw\" (UID: 
\"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551897 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-run-ovn\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.551968 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-log-socket\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.552004 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-run-openvswitch\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.552099 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-node-log\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.552122 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87aca778-6541-4f0e-a507-ead5a3fda02b-ovnkube-config\") pod \"ovnkube-node-t4spw\" (UID: 
\"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.552149 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-host-run-netns\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.552171 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-host-cni-bin\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.552325 4824 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-node-log\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.552348 4824 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.552366 4824 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.552383 4824 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:28 crc kubenswrapper[4824]: 
I0224 00:16:28.552399 4824 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.552414 4824 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d985b875-dd5e-4767-a4e2-209894575a8f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.552432 4824 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-slash\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.552447 4824 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.552466 4824 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d985b875-dd5e-4767-a4e2-209894575a8f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.552481 4824 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.552496 4824 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.552551 4824 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.552568 4824 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.552582 4824 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-log-socket\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.552600 4824 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.552616 4824 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.552631 4824 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d985b875-dd5e-4767-a4e2-209894575a8f-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.556600 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d985b875-dd5e-4767-a4e2-209894575a8f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d985b875-dd5e-4767-a4e2-209894575a8f" (UID: "d985b875-dd5e-4767-a4e2-209894575a8f"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.556708 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d985b875-dd5e-4767-a4e2-209894575a8f-kube-api-access-x6rnj" (OuterVolumeSpecName: "kube-api-access-x6rnj") pod "d985b875-dd5e-4767-a4e2-209894575a8f" (UID: "d985b875-dd5e-4767-a4e2-209894575a8f"). InnerVolumeSpecName "kube-api-access-x6rnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.567750 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d985b875-dd5e-4767-a4e2-209894575a8f" (UID: "d985b875-dd5e-4767-a4e2-209894575a8f"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.653716 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-run-systemd\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.653775 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87aca778-6541-4f0e-a507-ead5a3fda02b-ovnkube-script-lib\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.653801 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-host-cni-netd\") pod 
\"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.653829 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87aca778-6541-4f0e-a507-ead5a3fda02b-ovn-node-metrics-cert\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.653850 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-host-run-ovn-kubernetes\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.653893 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87aca778-6541-4f0e-a507-ead5a3fda02b-env-overrides\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.653922 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-etc-openvswitch\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.653933 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-host-cni-netd\") pod \"ovnkube-node-t4spw\" (UID: 
\"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.653943 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhggx\" (UniqueName: \"kubernetes.io/projected/87aca778-6541-4f0e-a507-ead5a3fda02b-kube-api-access-xhggx\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654060 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-systemd-units\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654100 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-host-run-ovn-kubernetes\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654127 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-systemd-units\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654152 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-host-slash\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654056 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-etc-openvswitch\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.653933 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-run-systemd\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654224 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-var-lib-openvswitch\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654259 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-run-ovn\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654296 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-var-lib-openvswitch\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 
00:16:28.654309 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-log-socket\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654209 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-host-slash\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654345 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-run-openvswitch\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654395 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-run-ovn\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654405 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-log-socket\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654446 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-node-log\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654484 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-node-log\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654483 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-run-openvswitch\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654552 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87aca778-6541-4f0e-a507-ead5a3fda02b-ovnkube-config\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654622 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-host-run-netns\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654626 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/87aca778-6541-4f0e-a507-ead5a3fda02b-env-overrides\") pod \"ovnkube-node-t4spw\" (UID: 
\"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654646 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-host-cni-bin\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654709 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-host-run-netns\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654789 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-host-cni-bin\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654836 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-host-kubelet\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654873 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-host-kubelet\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc 
kubenswrapper[4824]: I0224 00:16:28.654900 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.654960 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/87aca778-6541-4f0e-a507-ead5a3fda02b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.655045 4824 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d985b875-dd5e-4767-a4e2-209894575a8f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.655078 4824 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d985b875-dd5e-4767-a4e2-209894575a8f-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.655109 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6rnj\" (UniqueName: \"kubernetes.io/projected/d985b875-dd5e-4767-a4e2-209894575a8f-kube-api-access-x6rnj\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.655911 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/87aca778-6541-4f0e-a507-ead5a3fda02b-ovnkube-config\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.656134 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/87aca778-6541-4f0e-a507-ead5a3fda02b-ovnkube-script-lib\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.659920 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/87aca778-6541-4f0e-a507-ead5a3fda02b-ovn-node-metrics-cert\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.684850 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhggx\" (UniqueName: \"kubernetes.io/projected/87aca778-6541-4f0e-a507-ead5a3fda02b-kube-api-access-xhggx\") pod \"ovnkube-node-t4spw\" (UID: \"87aca778-6541-4f0e-a507-ead5a3fda02b\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.810023 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" Feb 24 00:16:28 crc kubenswrapper[4824]: W0224 00:16:28.843738 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87aca778_6541_4f0e_a507_ead5a3fda02b.slice/crio-a79a38c4264a1af2b0bc4a17800759a0a464d70c47dd6c31a10502eb8c455761 WatchSource:0}: Error finding container a79a38c4264a1af2b0bc4a17800759a0a464d70c47dd6c31a10502eb8c455761: Status 404 returned error can't find the container with id a79a38c4264a1af2b0bc4a17800759a0a464d70c47dd6c31a10502eb8c455761 Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.873719 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4xjg6_d985b875-dd5e-4767-a4e2-209894575a8f/ovnkube-controller/3.log" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.876439 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4xjg6_d985b875-dd5e-4767-a4e2-209894575a8f/ovn-acl-logging/0.log" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.877168 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4xjg6_d985b875-dd5e-4767-a4e2-209894575a8f/ovn-controller/0.log" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.877630 4824 generic.go:334] "Generic (PLEG): container finished" podID="d985b875-dd5e-4767-a4e2-209894575a8f" containerID="a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee" exitCode=0 Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.877655 4824 generic.go:334] "Generic (PLEG): container finished" podID="d985b875-dd5e-4767-a4e2-209894575a8f" containerID="05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430" exitCode=0 Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.877666 4824 generic.go:334] "Generic (PLEG): container finished" podID="d985b875-dd5e-4767-a4e2-209894575a8f" 
containerID="4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7" exitCode=0 Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.877677 4824 generic.go:334] "Generic (PLEG): container finished" podID="d985b875-dd5e-4767-a4e2-209894575a8f" containerID="f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04" exitCode=0 Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.877685 4824 generic.go:334] "Generic (PLEG): container finished" podID="d985b875-dd5e-4767-a4e2-209894575a8f" containerID="b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2" exitCode=0 Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.877697 4824 generic.go:334] "Generic (PLEG): container finished" podID="d985b875-dd5e-4767-a4e2-209894575a8f" containerID="869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8" exitCode=0 Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.877706 4824 generic.go:334] "Generic (PLEG): container finished" podID="d985b875-dd5e-4767-a4e2-209894575a8f" containerID="8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44" exitCode=143 Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.877715 4824 generic.go:334] "Generic (PLEG): container finished" podID="d985b875-dd5e-4767-a4e2-209894575a8f" containerID="0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91" exitCode=143 Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.877750 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerDied","Data":"a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.877798 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" 
event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerDied","Data":"05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.877810 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerDied","Data":"4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.877805 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.877835 4824 scope.go:117] "RemoveContainer" containerID="a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.877821 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerDied","Data":"f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878016 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerDied","Data":"b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878036 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerDied","Data":"869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878050 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878061 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878069 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878077 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878083 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878090 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878096 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878102 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878109 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878118 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerDied","Data":"8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878129 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878141 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878147 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878153 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878159 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878166 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878175 4824 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878184 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878190 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878196 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878207 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerDied","Data":"0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878218 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878226 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878233 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430"} Feb 24 
00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878240 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878246 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878251 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878257 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878261 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878266 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878271 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878280 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4xjg6" 
event={"ID":"d985b875-dd5e-4767-a4e2-209894575a8f","Type":"ContainerDied","Data":"39c21b24d26f0ce7cc1f64fcb5e9960f6a2487988e095495d5e73beb90c5e099"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878288 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878293 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878299 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878304 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878308 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878313 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878318 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878322 4824 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878327 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.878332 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.879137 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" event={"ID":"87aca778-6541-4f0e-a507-ead5a3fda02b","Type":"ContainerStarted","Data":"a79a38c4264a1af2b0bc4a17800759a0a464d70c47dd6c31a10502eb8c455761"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.880487 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wvqfl_15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac/kube-multus/2.log" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.880971 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wvqfl_15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac/kube-multus/1.log" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.881003 4824 generic.go:334] "Generic (PLEG): container finished" podID="15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac" containerID="e2df584c430cf17f7bb0674c0cc149453f39f49408337d9789565a34a1bfcb68" exitCode=2 Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.881031 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wvqfl" event={"ID":"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac","Type":"ContainerDied","Data":"e2df584c430cf17f7bb0674c0cc149453f39f49408337d9789565a34a1bfcb68"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 
00:16:28.881049 4824 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a39d567bc7ae642dd15ce2ea8650e1aa0fd3688e3c3c061ad145edf2bf963c06"} Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.881415 4824 scope.go:117] "RemoveContainer" containerID="e2df584c430cf17f7bb0674c0cc149453f39f49408337d9789565a34a1bfcb68" Feb 24 00:16:28 crc kubenswrapper[4824]: E0224 00:16:28.881628 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-wvqfl_openshift-multus(15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac)\"" pod="openshift-multus/multus-wvqfl" podUID="15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.909298 4824 scope.go:117] "RemoveContainer" containerID="5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.928626 4824 scope.go:117] "RemoveContainer" containerID="05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.929195 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4xjg6"] Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.933787 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4xjg6"] Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.944834 4824 scope.go:117] "RemoveContainer" containerID="4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.963187 4824 scope.go:117] "RemoveContainer" containerID="f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04" Feb 24 00:16:28 crc kubenswrapper[4824]: I0224 00:16:28.977029 4824 scope.go:117] "RemoveContainer" 
containerID="b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.054375 4824 scope.go:117] "RemoveContainer" containerID="869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.070211 4824 scope.go:117] "RemoveContainer" containerID="8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.083942 4824 scope.go:117] "RemoveContainer" containerID="0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.097136 4824 scope.go:117] "RemoveContainer" containerID="1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.112688 4824 scope.go:117] "RemoveContainer" containerID="a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee" Feb 24 00:16:29 crc kubenswrapper[4824]: E0224 00:16:29.113071 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee\": container with ID starting with a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee not found: ID does not exist" containerID="a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.113131 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee"} err="failed to get container status \"a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee\": rpc error: code = NotFound desc = could not find container \"a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee\": container with ID starting with 
a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.113165 4824 scope.go:117] "RemoveContainer" containerID="5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac" Feb 24 00:16:29 crc kubenswrapper[4824]: E0224 00:16:29.113755 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac\": container with ID starting with 5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac not found: ID does not exist" containerID="5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.113803 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac"} err="failed to get container status \"5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac\": rpc error: code = NotFound desc = could not find container \"5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac\": container with ID starting with 5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.113834 4824 scope.go:117] "RemoveContainer" containerID="05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430" Feb 24 00:16:29 crc kubenswrapper[4824]: E0224 00:16:29.114406 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\": container with ID starting with 05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430 not found: ID does not exist" containerID="05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430" Feb 24 00:16:29 crc 
kubenswrapper[4824]: I0224 00:16:29.114467 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430"} err="failed to get container status \"05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\": rpc error: code = NotFound desc = could not find container \"05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\": container with ID starting with 05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.114489 4824 scope.go:117] "RemoveContainer" containerID="4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7" Feb 24 00:16:29 crc kubenswrapper[4824]: E0224 00:16:29.114968 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\": container with ID starting with 4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7 not found: ID does not exist" containerID="4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.115000 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7"} err="failed to get container status \"4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\": rpc error: code = NotFound desc = could not find container \"4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\": container with ID starting with 4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.115023 4824 scope.go:117] "RemoveContainer" containerID="f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04" Feb 24 
00:16:29 crc kubenswrapper[4824]: E0224 00:16:29.115484 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\": container with ID starting with f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04 not found: ID does not exist" containerID="f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.115503 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04"} err="failed to get container status \"f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\": rpc error: code = NotFound desc = could not find container \"f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\": container with ID starting with f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.115531 4824 scope.go:117] "RemoveContainer" containerID="b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2" Feb 24 00:16:29 crc kubenswrapper[4824]: E0224 00:16:29.115932 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\": container with ID starting with b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2 not found: ID does not exist" containerID="b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.115952 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2"} err="failed to get container status 
\"b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\": rpc error: code = NotFound desc = could not find container \"b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\": container with ID starting with b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.115963 4824 scope.go:117] "RemoveContainer" containerID="869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8" Feb 24 00:16:29 crc kubenswrapper[4824]: E0224 00:16:29.116225 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\": container with ID starting with 869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8 not found: ID does not exist" containerID="869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.116250 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8"} err="failed to get container status \"869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\": rpc error: code = NotFound desc = could not find container \"869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\": container with ID starting with 869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.116263 4824 scope.go:117] "RemoveContainer" containerID="8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44" Feb 24 00:16:29 crc kubenswrapper[4824]: E0224 00:16:29.116897 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\": container with ID starting with 8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44 not found: ID does not exist" containerID="8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.116935 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44"} err="failed to get container status \"8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\": rpc error: code = NotFound desc = could not find container \"8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\": container with ID starting with 8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.116956 4824 scope.go:117] "RemoveContainer" containerID="0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91" Feb 24 00:16:29 crc kubenswrapper[4824]: E0224 00:16:29.117470 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\": container with ID starting with 0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91 not found: ID does not exist" containerID="0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.117511 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91"} err="failed to get container status \"0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\": rpc error: code = NotFound desc = could not find container \"0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\": container with ID 
starting with 0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.117557 4824 scope.go:117] "RemoveContainer" containerID="1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d" Feb 24 00:16:29 crc kubenswrapper[4824]: E0224 00:16:29.117793 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\": container with ID starting with 1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d not found: ID does not exist" containerID="1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.117824 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d"} err="failed to get container status \"1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\": rpc error: code = NotFound desc = could not find container \"1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\": container with ID starting with 1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.117843 4824 scope.go:117] "RemoveContainer" containerID="a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.118261 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee"} err="failed to get container status \"a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee\": rpc error: code = NotFound desc = could not find container \"a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee\": 
container with ID starting with a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.118296 4824 scope.go:117] "RemoveContainer" containerID="5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.118607 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac"} err="failed to get container status \"5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac\": rpc error: code = NotFound desc = could not find container \"5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac\": container with ID starting with 5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.118666 4824 scope.go:117] "RemoveContainer" containerID="05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.118939 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430"} err="failed to get container status \"05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\": rpc error: code = NotFound desc = could not find container \"05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\": container with ID starting with 05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.118966 4824 scope.go:117] "RemoveContainer" containerID="4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.119209 4824 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7"} err="failed to get container status \"4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\": rpc error: code = NotFound desc = could not find container \"4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\": container with ID starting with 4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.119236 4824 scope.go:117] "RemoveContainer" containerID="f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.119694 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04"} err="failed to get container status \"f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\": rpc error: code = NotFound desc = could not find container \"f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\": container with ID starting with f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.119723 4824 scope.go:117] "RemoveContainer" containerID="b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.120145 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2"} err="failed to get container status \"b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\": rpc error: code = NotFound desc = could not find container \"b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\": container with ID starting with b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2 not found: ID does not 
exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.120174 4824 scope.go:117] "RemoveContainer" containerID="869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.120504 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8"} err="failed to get container status \"869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\": rpc error: code = NotFound desc = could not find container \"869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\": container with ID starting with 869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.120631 4824 scope.go:117] "RemoveContainer" containerID="8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.121095 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44"} err="failed to get container status \"8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\": rpc error: code = NotFound desc = could not find container \"8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\": container with ID starting with 8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.121123 4824 scope.go:117] "RemoveContainer" containerID="0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.121583 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91"} err="failed to get container status 
\"0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\": rpc error: code = NotFound desc = could not find container \"0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\": container with ID starting with 0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.121610 4824 scope.go:117] "RemoveContainer" containerID="1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.121821 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d"} err="failed to get container status \"1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\": rpc error: code = NotFound desc = could not find container \"1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\": container with ID starting with 1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.121847 4824 scope.go:117] "RemoveContainer" containerID="a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.122231 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee"} err="failed to get container status \"a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee\": rpc error: code = NotFound desc = could not find container \"a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee\": container with ID starting with a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.122259 4824 scope.go:117] "RemoveContainer" 
containerID="5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.122536 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac"} err="failed to get container status \"5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac\": rpc error: code = NotFound desc = could not find container \"5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac\": container with ID starting with 5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.122564 4824 scope.go:117] "RemoveContainer" containerID="05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.122866 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430"} err="failed to get container status \"05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\": rpc error: code = NotFound desc = could not find container \"05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\": container with ID starting with 05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.122892 4824 scope.go:117] "RemoveContainer" containerID="4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.123292 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7"} err="failed to get container status \"4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\": rpc error: code = NotFound desc = could 
not find container \"4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\": container with ID starting with 4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.123323 4824 scope.go:117] "RemoveContainer" containerID="f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.123616 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04"} err="failed to get container status \"f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\": rpc error: code = NotFound desc = could not find container \"f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\": container with ID starting with f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.123642 4824 scope.go:117] "RemoveContainer" containerID="b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.123911 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2"} err="failed to get container status \"b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\": rpc error: code = NotFound desc = could not find container \"b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\": container with ID starting with b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.123935 4824 scope.go:117] "RemoveContainer" containerID="869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 
00:16:29.124359 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8"} err="failed to get container status \"869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\": rpc error: code = NotFound desc = could not find container \"869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\": container with ID starting with 869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.124386 4824 scope.go:117] "RemoveContainer" containerID="8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.124690 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44"} err="failed to get container status \"8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\": rpc error: code = NotFound desc = could not find container \"8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\": container with ID starting with 8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.124730 4824 scope.go:117] "RemoveContainer" containerID="0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.125043 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91"} err="failed to get container status \"0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\": rpc error: code = NotFound desc = could not find container \"0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\": container with ID starting with 
0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.125080 4824 scope.go:117] "RemoveContainer" containerID="1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.125568 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d"} err="failed to get container status \"1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\": rpc error: code = NotFound desc = could not find container \"1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\": container with ID starting with 1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.125596 4824 scope.go:117] "RemoveContainer" containerID="a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.125817 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee"} err="failed to get container status \"a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee\": rpc error: code = NotFound desc = could not find container \"a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee\": container with ID starting with a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.125846 4824 scope.go:117] "RemoveContainer" containerID="5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.126386 4824 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac"} err="failed to get container status \"5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac\": rpc error: code = NotFound desc = could not find container \"5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac\": container with ID starting with 5817c201e3633a7aa1c9aa18cf5c3bf28f2a23c56257be31e935e9e29bd522ac not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.126413 4824 scope.go:117] "RemoveContainer" containerID="05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.127032 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430"} err="failed to get container status \"05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\": rpc error: code = NotFound desc = could not find container \"05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430\": container with ID starting with 05015554a7e8c72722c7f73491f24e41b294d2038dead07caff46440b0a52430 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.127060 4824 scope.go:117] "RemoveContainer" containerID="4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.127376 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7"} err="failed to get container status \"4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\": rpc error: code = NotFound desc = could not find container \"4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7\": container with ID starting with 4bdc7abfd0f25251633d17ba431c36dd07c19b7f3a701ac3ea74a12796534fa7 not found: ID does not 
exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.127404 4824 scope.go:117] "RemoveContainer" containerID="f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.127726 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04"} err="failed to get container status \"f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\": rpc error: code = NotFound desc = could not find container \"f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04\": container with ID starting with f961e1ae6ad003099155a819d7589b3ba8d5bbdc65e7bd1fa6b52ee88c489d04 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.127753 4824 scope.go:117] "RemoveContainer" containerID="b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.128151 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2"} err="failed to get container status \"b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\": rpc error: code = NotFound desc = could not find container \"b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2\": container with ID starting with b933c4614e02f1261c4ac6b6b76c9ca4b4a0f66b500097d02f9e99cd6714c0c2 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.128176 4824 scope.go:117] "RemoveContainer" containerID="869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.128421 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8"} err="failed to get container status 
\"869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\": rpc error: code = NotFound desc = could not find container \"869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8\": container with ID starting with 869a378050c78b5bfbce937f15dbad3e040afe6125d7cf374215ef0e1af9c2e8 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.128449 4824 scope.go:117] "RemoveContainer" containerID="8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.128883 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44"} err="failed to get container status \"8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\": rpc error: code = NotFound desc = could not find container \"8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44\": container with ID starting with 8c207a8cee28dac50b2a822bec679f42de8ebbc1e3ad3fb020a6fdfc6886ac44 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.128915 4824 scope.go:117] "RemoveContainer" containerID="0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.129152 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91"} err="failed to get container status \"0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\": rpc error: code = NotFound desc = could not find container \"0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91\": container with ID starting with 0fa8ec48b6d5542d77c867525ce787c3e7e0cd74f87cfb263a9ed31917927b91 not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.129177 4824 scope.go:117] "RemoveContainer" 
containerID="1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.129449 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d"} err="failed to get container status \"1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\": rpc error: code = NotFound desc = could not find container \"1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d\": container with ID starting with 1cd4549610e3a7e2be3c5937d7b3ea2f2395accfb4af6e505168220b33cda00d not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.129474 4824 scope.go:117] "RemoveContainer" containerID="a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.129912 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee"} err="failed to get container status \"a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee\": rpc error: code = NotFound desc = could not find container \"a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee\": container with ID starting with a7e1ce655b2a5d070089d56de6a8dfa5549a4154911bfccd6be2b38df9c3b5ee not found: ID does not exist" Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.892394 4824 generic.go:334] "Generic (PLEG): container finished" podID="87aca778-6541-4f0e-a507-ead5a3fda02b" containerID="e9b5d71c6d6ab2a571df3b8c2466c1f1518b8f93d49d67a2431aaac13abdd818" exitCode=0 Feb 24 00:16:29 crc kubenswrapper[4824]: I0224 00:16:29.892470 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" 
event={"ID":"87aca778-6541-4f0e-a507-ead5a3fda02b","Type":"ContainerDied","Data":"e9b5d71c6d6ab2a571df3b8c2466c1f1518b8f93d49d67a2431aaac13abdd818"} Feb 24 00:16:30 crc kubenswrapper[4824]: I0224 00:16:30.702783 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d985b875-dd5e-4767-a4e2-209894575a8f" path="/var/lib/kubelet/pods/d985b875-dd5e-4767-a4e2-209894575a8f/volumes" Feb 24 00:16:30 crc kubenswrapper[4824]: I0224 00:16:30.910353 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" event={"ID":"87aca778-6541-4f0e-a507-ead5a3fda02b","Type":"ContainerStarted","Data":"abfa01f497b5a2fef06efa2d3b4f068777bd2e7c24eea5b2af12267365af91da"} Feb 24 00:16:30 crc kubenswrapper[4824]: I0224 00:16:30.910404 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" event={"ID":"87aca778-6541-4f0e-a507-ead5a3fda02b","Type":"ContainerStarted","Data":"e334e54c3bed63e8e5351cb59818891a193cf2d05dbbe6298f837bb697f7a687"} Feb 24 00:16:30 crc kubenswrapper[4824]: I0224 00:16:30.910418 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" event={"ID":"87aca778-6541-4f0e-a507-ead5a3fda02b","Type":"ContainerStarted","Data":"3bba66335f2e0738f5bc158c1dac715556ad4f81990f2963122432435a924c2d"} Feb 24 00:16:30 crc kubenswrapper[4824]: I0224 00:16:30.910430 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" event={"ID":"87aca778-6541-4f0e-a507-ead5a3fda02b","Type":"ContainerStarted","Data":"1a0bf0abc9b07ec27b01757009988bc9cb44d1dd824ec4b16ede5d468a11f4be"} Feb 24 00:16:30 crc kubenswrapper[4824]: I0224 00:16:30.910444 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" event={"ID":"87aca778-6541-4f0e-a507-ead5a3fda02b","Type":"ContainerStarted","Data":"fa243a57aebd7ec4e6329f2bfb9eeb8b165ec5b1c0f91d4a044881000a37b7c1"} 
Feb 24 00:16:30 crc kubenswrapper[4824]: I0224 00:16:30.910456 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" event={"ID":"87aca778-6541-4f0e-a507-ead5a3fda02b","Type":"ContainerStarted","Data":"1fa7b8e9723463aebc01d4e46d77b21c283dd62251d21446425e1df801448768"}
Feb 24 00:16:33 crc kubenswrapper[4824]: I0224 00:16:33.935931 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" event={"ID":"87aca778-6541-4f0e-a507-ead5a3fda02b","Type":"ContainerStarted","Data":"8c1150a736dd7bf2f64ca69df260687e576899e7e644ecb3cd1b00e9d01a6231"}
Feb 24 00:16:35 crc kubenswrapper[4824]: I0224 00:16:35.954016 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" event={"ID":"87aca778-6541-4f0e-a507-ead5a3fda02b","Type":"ContainerStarted","Data":"b491b1a957c21214b1275b30eb068f5ca35e1048f7cc55ec02f64d18569a5307"}
Feb 24 00:16:35 crc kubenswrapper[4824]: I0224 00:16:35.954460 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4spw"
Feb 24 00:16:35 crc kubenswrapper[4824]: I0224 00:16:35.954535 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4spw"
Feb 24 00:16:35 crc kubenswrapper[4824]: I0224 00:16:35.984834 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t4spw"
Feb 24 00:16:35 crc kubenswrapper[4824]: I0224 00:16:35.988309 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-t4spw" podStartSLOduration=7.9882871269999995 podStartE2EDuration="7.988287127s" podCreationTimestamp="2026-02-24 00:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:16:35.984469548 +0000 UTC m=+659.974094027" watchObservedRunningTime="2026-02-24 00:16:35.988287127 +0000 UTC m=+659.977911616"
Feb 24 00:16:36 crc kubenswrapper[4824]: I0224 00:16:36.960923 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4spw"
Feb 24 00:16:36 crc kubenswrapper[4824]: I0224 00:16:36.991446 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t4spw"
Feb 24 00:16:37 crc kubenswrapper[4824]: I0224 00:16:37.151569 4824 scope.go:117] "RemoveContainer" containerID="a39d567bc7ae642dd15ce2ea8650e1aa0fd3688e3c3c061ad145edf2bf963c06"
Feb 24 00:16:37 crc kubenswrapper[4824]: I0224 00:16:37.966572 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wvqfl_15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac/kube-multus/2.log"
Feb 24 00:16:43 crc kubenswrapper[4824]: I0224 00:16:43.693746 4824 scope.go:117] "RemoveContainer" containerID="e2df584c430cf17f7bb0674c0cc149453f39f49408337d9789565a34a1bfcb68"
Feb 24 00:16:43 crc kubenswrapper[4824]: E0224 00:16:43.694431 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-wvqfl_openshift-multus(15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac)\"" pod="openshift-multus/multus-wvqfl" podUID="15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac"
Feb 24 00:16:53 crc kubenswrapper[4824]: I0224 00:16:53.276229 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 00:16:53 crc kubenswrapper[4824]: I0224 00:16:53.277013 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 00:16:53 crc kubenswrapper[4824]: I0224 00:16:53.277105 4824 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn"
Feb 24 00:16:53 crc kubenswrapper[4824]: I0224 00:16:53.278227 4824 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"14f28b64a526a9334cfaacd13a3a23756d3ea46670a60bcfe695a7e80551056e"} pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 24 00:16:53 crc kubenswrapper[4824]: I0224 00:16:53.278336 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" containerID="cri-o://14f28b64a526a9334cfaacd13a3a23756d3ea46670a60bcfe695a7e80551056e" gracePeriod=600
Feb 24 00:16:54 crc kubenswrapper[4824]: I0224 00:16:54.095818 4824 generic.go:334] "Generic (PLEG): container finished" podID="939ca085-9383-42e6-b7d6-37f101137273" containerID="14f28b64a526a9334cfaacd13a3a23756d3ea46670a60bcfe695a7e80551056e" exitCode=0
Feb 24 00:16:54 crc kubenswrapper[4824]: I0224 00:16:54.095897 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" event={"ID":"939ca085-9383-42e6-b7d6-37f101137273","Type":"ContainerDied","Data":"14f28b64a526a9334cfaacd13a3a23756d3ea46670a60bcfe695a7e80551056e"}
Feb 24 00:16:54 crc kubenswrapper[4824]: I0224 00:16:54.096131 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" event={"ID":"939ca085-9383-42e6-b7d6-37f101137273","Type":"ContainerStarted","Data":"43fc5998f7ab77a1ca73519cb6a4280f5869d3a50153e1dc6202d26bc4d9b6a3"}
Feb 24 00:16:54 crc kubenswrapper[4824]: I0224 00:16:54.096155 4824 scope.go:117] "RemoveContainer" containerID="ec5f29f7aaf13391c2278f1eb972e5c2f9ed40d998b7f6d08d6d97e54173df94"
Feb 24 00:16:58 crc kubenswrapper[4824]: I0224 00:16:58.694160 4824 scope.go:117] "RemoveContainer" containerID="e2df584c430cf17f7bb0674c0cc149453f39f49408337d9789565a34a1bfcb68"
Feb 24 00:16:58 crc kubenswrapper[4824]: I0224 00:16:58.839906 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t4spw"
Feb 24 00:16:59 crc kubenswrapper[4824]: I0224 00:16:59.129383 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wvqfl_15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac/kube-multus/2.log"
Feb 24 00:16:59 crc kubenswrapper[4824]: I0224 00:16:59.129444 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wvqfl" event={"ID":"15b9ae43-8f87-4f2f-a8d9-b55c8fa986ac","Type":"ContainerStarted","Data":"ebbf9d60a6e27302379e600ac283f0a46e39af0887f9444dc1533d94512c6024"}
Feb 24 00:17:27 crc kubenswrapper[4824]: I0224 00:17:27.867096 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-49mft"]
Feb 24 00:17:27 crc kubenswrapper[4824]: I0224 00:17:27.868573 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-49mft" podUID="a392c527-174d-4f66-a7cd-5f625192f3c7" containerName="registry-server" containerID="cri-o://50e709672894167a7582bd8375871ba68b8ed732f6c8d86a62300ec1ec37ba43" gracePeriod=30
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.256464 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-49mft"
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.332673 4824 generic.go:334] "Generic (PLEG): container finished" podID="a392c527-174d-4f66-a7cd-5f625192f3c7" containerID="50e709672894167a7582bd8375871ba68b8ed732f6c8d86a62300ec1ec37ba43" exitCode=0
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.332725 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49mft" event={"ID":"a392c527-174d-4f66-a7cd-5f625192f3c7","Type":"ContainerDied","Data":"50e709672894167a7582bd8375871ba68b8ed732f6c8d86a62300ec1ec37ba43"}
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.332763 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-49mft" event={"ID":"a392c527-174d-4f66-a7cd-5f625192f3c7","Type":"ContainerDied","Data":"c3b19531b333b7d03df785c5e3fd25d1eeba8ccb1da22f7e993d8082b132b9a9"}
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.332781 4824 scope.go:117] "RemoveContainer" containerID="50e709672894167a7582bd8375871ba68b8ed732f6c8d86a62300ec1ec37ba43"
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.332809 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-49mft"
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.334293 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a392c527-174d-4f66-a7cd-5f625192f3c7-utilities\") pod \"a392c527-174d-4f66-a7cd-5f625192f3c7\" (UID: \"a392c527-174d-4f66-a7cd-5f625192f3c7\") "
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.334344 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hxsv\" (UniqueName: \"kubernetes.io/projected/a392c527-174d-4f66-a7cd-5f625192f3c7-kube-api-access-5hxsv\") pod \"a392c527-174d-4f66-a7cd-5f625192f3c7\" (UID: \"a392c527-174d-4f66-a7cd-5f625192f3c7\") "
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.334429 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a392c527-174d-4f66-a7cd-5f625192f3c7-catalog-content\") pod \"a392c527-174d-4f66-a7cd-5f625192f3c7\" (UID: \"a392c527-174d-4f66-a7cd-5f625192f3c7\") "
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.335684 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a392c527-174d-4f66-a7cd-5f625192f3c7-utilities" (OuterVolumeSpecName: "utilities") pod "a392c527-174d-4f66-a7cd-5f625192f3c7" (UID: "a392c527-174d-4f66-a7cd-5f625192f3c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.341576 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a392c527-174d-4f66-a7cd-5f625192f3c7-kube-api-access-5hxsv" (OuterVolumeSpecName: "kube-api-access-5hxsv") pod "a392c527-174d-4f66-a7cd-5f625192f3c7" (UID: "a392c527-174d-4f66-a7cd-5f625192f3c7"). InnerVolumeSpecName "kube-api-access-5hxsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.354384 4824 scope.go:117] "RemoveContainer" containerID="c010092ad92facedf5f70ec1775c1db569552e65318cda2aa08c7010e7ba052b"
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.358878 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a392c527-174d-4f66-a7cd-5f625192f3c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a392c527-174d-4f66-a7cd-5f625192f3c7" (UID: "a392c527-174d-4f66-a7cd-5f625192f3c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.370959 4824 scope.go:117] "RemoveContainer" containerID="cbad1694c39409f46b82fab5f1dbe7bae8d23677e9a40e2d515237465aad7438"
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.388003 4824 scope.go:117] "RemoveContainer" containerID="50e709672894167a7582bd8375871ba68b8ed732f6c8d86a62300ec1ec37ba43"
Feb 24 00:17:28 crc kubenswrapper[4824]: E0224 00:17:28.388624 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50e709672894167a7582bd8375871ba68b8ed732f6c8d86a62300ec1ec37ba43\": container with ID starting with 50e709672894167a7582bd8375871ba68b8ed732f6c8d86a62300ec1ec37ba43 not found: ID does not exist" containerID="50e709672894167a7582bd8375871ba68b8ed732f6c8d86a62300ec1ec37ba43"
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.388677 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50e709672894167a7582bd8375871ba68b8ed732f6c8d86a62300ec1ec37ba43"} err="failed to get container status \"50e709672894167a7582bd8375871ba68b8ed732f6c8d86a62300ec1ec37ba43\": rpc error: code = NotFound desc = could not find container \"50e709672894167a7582bd8375871ba68b8ed732f6c8d86a62300ec1ec37ba43\": container with ID starting with 50e709672894167a7582bd8375871ba68b8ed732f6c8d86a62300ec1ec37ba43 not found: ID does not exist"
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.388708 4824 scope.go:117] "RemoveContainer" containerID="c010092ad92facedf5f70ec1775c1db569552e65318cda2aa08c7010e7ba052b"
Feb 24 00:17:28 crc kubenswrapper[4824]: E0224 00:17:28.389129 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c010092ad92facedf5f70ec1775c1db569552e65318cda2aa08c7010e7ba052b\": container with ID starting with c010092ad92facedf5f70ec1775c1db569552e65318cda2aa08c7010e7ba052b not found: ID does not exist" containerID="c010092ad92facedf5f70ec1775c1db569552e65318cda2aa08c7010e7ba052b"
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.389151 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c010092ad92facedf5f70ec1775c1db569552e65318cda2aa08c7010e7ba052b"} err="failed to get container status \"c010092ad92facedf5f70ec1775c1db569552e65318cda2aa08c7010e7ba052b\": rpc error: code = NotFound desc = could not find container \"c010092ad92facedf5f70ec1775c1db569552e65318cda2aa08c7010e7ba052b\": container with ID starting with c010092ad92facedf5f70ec1775c1db569552e65318cda2aa08c7010e7ba052b not found: ID does not exist"
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.389163 4824 scope.go:117] "RemoveContainer" containerID="cbad1694c39409f46b82fab5f1dbe7bae8d23677e9a40e2d515237465aad7438"
Feb 24 00:17:28 crc kubenswrapper[4824]: E0224 00:17:28.389448 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbad1694c39409f46b82fab5f1dbe7bae8d23677e9a40e2d515237465aad7438\": container with ID starting with cbad1694c39409f46b82fab5f1dbe7bae8d23677e9a40e2d515237465aad7438 not found: ID does not exist" containerID="cbad1694c39409f46b82fab5f1dbe7bae8d23677e9a40e2d515237465aad7438"
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.389487 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbad1694c39409f46b82fab5f1dbe7bae8d23677e9a40e2d515237465aad7438"} err="failed to get container status \"cbad1694c39409f46b82fab5f1dbe7bae8d23677e9a40e2d515237465aad7438\": rpc error: code = NotFound desc = could not find container \"cbad1694c39409f46b82fab5f1dbe7bae8d23677e9a40e2d515237465aad7438\": container with ID starting with cbad1694c39409f46b82fab5f1dbe7bae8d23677e9a40e2d515237465aad7438 not found: ID does not exist"
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.435641 4824 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a392c527-174d-4f66-a7cd-5f625192f3c7-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.435681 4824 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a392c527-174d-4f66-a7cd-5f625192f3c7-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.435691 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hxsv\" (UniqueName: \"kubernetes.io/projected/a392c527-174d-4f66-a7cd-5f625192f3c7-kube-api-access-5hxsv\") on node \"crc\" DevicePath \"\""
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.671947 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-49mft"]
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.675880 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-49mft"]
Feb 24 00:17:28 crc kubenswrapper[4824]: I0224 00:17:28.699359 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a392c527-174d-4f66-a7cd-5f625192f3c7" path="/var/lib/kubelet/pods/a392c527-174d-4f66-a7cd-5f625192f3c7/volumes"
Feb 24 00:17:31 crc kubenswrapper[4824]: I0224 00:17:31.704236 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92"]
Feb 24 00:17:31 crc kubenswrapper[4824]: E0224 00:17:31.704514 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a392c527-174d-4f66-a7cd-5f625192f3c7" containerName="registry-server"
Feb 24 00:17:31 crc kubenswrapper[4824]: I0224 00:17:31.704621 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="a392c527-174d-4f66-a7cd-5f625192f3c7" containerName="registry-server"
Feb 24 00:17:31 crc kubenswrapper[4824]: E0224 00:17:31.704636 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a392c527-174d-4f66-a7cd-5f625192f3c7" containerName="extract-content"
Feb 24 00:17:31 crc kubenswrapper[4824]: I0224 00:17:31.704645 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="a392c527-174d-4f66-a7cd-5f625192f3c7" containerName="extract-content"
Feb 24 00:17:31 crc kubenswrapper[4824]: E0224 00:17:31.704658 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a392c527-174d-4f66-a7cd-5f625192f3c7" containerName="extract-utilities"
Feb 24 00:17:31 crc kubenswrapper[4824]: I0224 00:17:31.704667 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="a392c527-174d-4f66-a7cd-5f625192f3c7" containerName="extract-utilities"
Feb 24 00:17:31 crc kubenswrapper[4824]: I0224 00:17:31.704806 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="a392c527-174d-4f66-a7cd-5f625192f3c7" containerName="registry-server"
Feb 24 00:17:31 crc kubenswrapper[4824]: I0224 00:17:31.705729 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92"
Feb 24 00:17:31 crc kubenswrapper[4824]: I0224 00:17:31.707908 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 24 00:17:31 crc kubenswrapper[4824]: I0224 00:17:31.716074 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92"]
Feb 24 00:17:31 crc kubenswrapper[4824]: I0224 00:17:31.875221 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92\" (UID: \"5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92"
Feb 24 00:17:31 crc kubenswrapper[4824]: I0224 00:17:31.875317 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9crzx\" (UniqueName: \"kubernetes.io/projected/5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d-kube-api-access-9crzx\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92\" (UID: \"5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92"
Feb 24 00:17:31 crc kubenswrapper[4824]: I0224 00:17:31.875365 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92\" (UID: \"5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92"
Feb 24 00:17:31 crc kubenswrapper[4824]: I0224 00:17:31.976278 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92\" (UID: \"5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92"
Feb 24 00:17:31 crc kubenswrapper[4824]: I0224 00:17:31.976330 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9crzx\" (UniqueName: \"kubernetes.io/projected/5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d-kube-api-access-9crzx\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92\" (UID: \"5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92"
Feb 24 00:17:31 crc kubenswrapper[4824]: I0224 00:17:31.976363 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92\" (UID: \"5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92"
Feb 24 00:17:31 crc kubenswrapper[4824]: I0224 00:17:31.976979 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92\" (UID: \"5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92"
Feb 24 00:17:31 crc kubenswrapper[4824]: I0224 00:17:31.977015 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92\" (UID: \"5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92"
Feb 24 00:17:32 crc kubenswrapper[4824]: I0224 00:17:32.003353 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9crzx\" (UniqueName: \"kubernetes.io/projected/5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d-kube-api-access-9crzx\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92\" (UID: \"5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92"
Feb 24 00:17:32 crc kubenswrapper[4824]: I0224 00:17:32.029490 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92"
Feb 24 00:17:32 crc kubenswrapper[4824]: I0224 00:17:32.245413 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92"]
Feb 24 00:17:32 crc kubenswrapper[4824]: I0224 00:17:32.365126 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92" event={"ID":"5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d","Type":"ContainerStarted","Data":"35bbfc639b02e1e6a0f72cc00f210f348d0be10c023c511eb8a617921f11e775"}
Feb 24 00:17:33 crc kubenswrapper[4824]: I0224 00:17:33.373160 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92" event={"ID":"5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d","Type":"ContainerStarted","Data":"8b1b7abd9a3483e4bd0c8ee3dc68d6fb0d8f48e612be434df1ee586f3cb60b45"}
Feb 24 00:17:34 crc kubenswrapper[4824]: I0224 00:17:34.379831 4824 generic.go:334] "Generic (PLEG): container finished" podID="5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d" containerID="8b1b7abd9a3483e4bd0c8ee3dc68d6fb0d8f48e612be434df1ee586f3cb60b45" exitCode=0
Feb 24 00:17:34 crc kubenswrapper[4824]: I0224 00:17:34.379899 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92" event={"ID":"5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d","Type":"ContainerDied","Data":"8b1b7abd9a3483e4bd0c8ee3dc68d6fb0d8f48e612be434df1ee586f3cb60b45"}
Feb 24 00:17:34 crc kubenswrapper[4824]: I0224 00:17:34.382731 4824 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 24 00:17:37 crc kubenswrapper[4824]: I0224 00:17:37.090985 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf"]
Feb 24 00:17:37 crc kubenswrapper[4824]: I0224 00:17:37.092600 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf"
Feb 24 00:17:37 crc kubenswrapper[4824]: I0224 00:17:37.101069 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf"]
Feb 24 00:17:37 crc kubenswrapper[4824]: I0224 00:17:37.147498 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzjsj\" (UniqueName: \"kubernetes.io/projected/7191d6cb-0051-4cd2-a93d-a26af6142eb8-kube-api-access-hzjsj\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf\" (UID: \"7191d6cb-0051-4cd2-a93d-a26af6142eb8\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf"
Feb 24 00:17:37 crc kubenswrapper[4824]: I0224 00:17:37.147614 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7191d6cb-0051-4cd2-a93d-a26af6142eb8-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf\" (UID: \"7191d6cb-0051-4cd2-a93d-a26af6142eb8\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf"
Feb 24 00:17:37 crc kubenswrapper[4824]: I0224 00:17:37.147753 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7191d6cb-0051-4cd2-a93d-a26af6142eb8-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf\" (UID: \"7191d6cb-0051-4cd2-a93d-a26af6142eb8\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf"
Feb 24 00:17:37 crc kubenswrapper[4824]: I0224 00:17:37.249655 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzjsj\" (UniqueName: \"kubernetes.io/projected/7191d6cb-0051-4cd2-a93d-a26af6142eb8-kube-api-access-hzjsj\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf\" (UID: \"7191d6cb-0051-4cd2-a93d-a26af6142eb8\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf"
Feb 24 00:17:37 crc kubenswrapper[4824]: I0224 00:17:37.249771 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7191d6cb-0051-4cd2-a93d-a26af6142eb8-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf\" (UID: \"7191d6cb-0051-4cd2-a93d-a26af6142eb8\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf"
Feb 24 00:17:37 crc kubenswrapper[4824]: I0224 00:17:37.249809 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7191d6cb-0051-4cd2-a93d-a26af6142eb8-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf\" (UID: \"7191d6cb-0051-4cd2-a93d-a26af6142eb8\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf"
Feb 24 00:17:37 crc kubenswrapper[4824]: I0224 00:17:37.250331 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7191d6cb-0051-4cd2-a93d-a26af6142eb8-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf\" (UID: \"7191d6cb-0051-4cd2-a93d-a26af6142eb8\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf"
Feb 24 00:17:37 crc kubenswrapper[4824]: I0224 00:17:37.250674 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7191d6cb-0051-4cd2-a93d-a26af6142eb8-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf\" (UID: \"7191d6cb-0051-4cd2-a93d-a26af6142eb8\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf"
Feb 24 00:17:37 crc kubenswrapper[4824]: I0224 00:17:37.268416 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzjsj\" (UniqueName: \"kubernetes.io/projected/7191d6cb-0051-4cd2-a93d-a26af6142eb8-kube-api-access-hzjsj\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf\" (UID: \"7191d6cb-0051-4cd2-a93d-a26af6142eb8\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf"
Feb 24 00:17:37 crc kubenswrapper[4824]: I0224 00:17:37.417807 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf"
Feb 24 00:17:37 crc kubenswrapper[4824]: I0224 00:17:37.903538 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw"]
Feb 24 00:17:37 crc kubenswrapper[4824]: I0224 00:17:37.904787 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw"
Feb 24 00:17:37 crc kubenswrapper[4824]: I0224 00:17:37.922371 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw"]
Feb 24 00:17:37 crc kubenswrapper[4824]: I0224 00:17:37.960217 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55bd419c-9f16-434a-9a7f-0693ab6601d4-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw\" (UID: \"55bd419c-9f16-434a-9a7f-0693ab6601d4\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw"
Feb 24 00:17:37 crc kubenswrapper[4824]: I0224 00:17:37.960399 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h5lz\" (UniqueName: \"kubernetes.io/projected/55bd419c-9f16-434a-9a7f-0693ab6601d4-kube-api-access-5h5lz\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw\" (UID: \"55bd419c-9f16-434a-9a7f-0693ab6601d4\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw"
Feb 24 00:17:37 crc kubenswrapper[4824]: I0224 00:17:37.960504 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55bd419c-9f16-434a-9a7f-0693ab6601d4-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw\" (UID: \"55bd419c-9f16-434a-9a7f-0693ab6601d4\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw"
Feb 24 00:17:38 crc kubenswrapper[4824]: I0224 00:17:38.062413 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55bd419c-9f16-434a-9a7f-0693ab6601d4-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw\" (UID: \"55bd419c-9f16-434a-9a7f-0693ab6601d4\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw"
Feb 24 00:17:38 crc kubenswrapper[4824]: I0224 00:17:38.062589 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55bd419c-9f16-434a-9a7f-0693ab6601d4-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw\" (UID: \"55bd419c-9f16-434a-9a7f-0693ab6601d4\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw"
Feb 24 00:17:38 crc kubenswrapper[4824]: I0224 00:17:38.062632 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h5lz\" (UniqueName: \"kubernetes.io/projected/55bd419c-9f16-434a-9a7f-0693ab6601d4-kube-api-access-5h5lz\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw\" (UID: \"55bd419c-9f16-434a-9a7f-0693ab6601d4\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw"
Feb 24 00:17:38 crc kubenswrapper[4824]: I0224 00:17:38.063390 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55bd419c-9f16-434a-9a7f-0693ab6601d4-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw\" (UID: \"55bd419c-9f16-434a-9a7f-0693ab6601d4\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw"
Feb 24 00:17:38 crc kubenswrapper[4824]: I0224 00:17:38.063405 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55bd419c-9f16-434a-9a7f-0693ab6601d4-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw\" (UID: \"55bd419c-9f16-434a-9a7f-0693ab6601d4\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw"
Feb 24 00:17:38 crc kubenswrapper[4824]: I0224 00:17:38.084682 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h5lz\" (UniqueName: \"kubernetes.io/projected/55bd419c-9f16-434a-9a7f-0693ab6601d4-kube-api-access-5h5lz\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw\" (UID: \"55bd419c-9f16-434a-9a7f-0693ab6601d4\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw"
Feb 24 00:17:38 crc kubenswrapper[4824]: I0224 00:17:38.221417 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw"
Feb 24 00:17:38 crc kubenswrapper[4824]: I0224 00:17:38.425876 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw"]
Feb 24 00:17:38 crc kubenswrapper[4824]: I0224 00:17:38.465946 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf"]
Feb 24 00:17:38 crc kubenswrapper[4824]: W0224 00:17:38.473800 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7191d6cb_0051_4cd2_a93d_a26af6142eb8.slice/crio-36022042664f199397e29d69a9aa82c3b910fb25a3770a289d7d8170493cb3cf WatchSource:0}: Error finding container 36022042664f199397e29d69a9aa82c3b910fb25a3770a289d7d8170493cb3cf: Status 404 returned error can't find the container with id 36022042664f199397e29d69a9aa82c3b910fb25a3770a289d7d8170493cb3cf
Feb 24 00:17:39 crc kubenswrapper[4824]: I0224 00:17:39.411890 4824 generic.go:334] "Generic (PLEG): container finished" podID="55bd419c-9f16-434a-9a7f-0693ab6601d4" containerID="af924869ded52d4061cfbf48fbbd43b0fbd16756f85c03cfaad1ab7c3977040d" exitCode=0
Feb 24 00:17:39 crc kubenswrapper[4824]: I0224 00:17:39.412044 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw" event={"ID":"55bd419c-9f16-434a-9a7f-0693ab6601d4","Type":"ContainerDied","Data":"af924869ded52d4061cfbf48fbbd43b0fbd16756f85c03cfaad1ab7c3977040d"}
Feb 24 00:17:39 crc kubenswrapper[4824]: I0224 00:17:39.412083 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw" event={"ID":"55bd419c-9f16-434a-9a7f-0693ab6601d4","Type":"ContainerStarted","Data":"3a70ebd7757696ac2b83b90dea2f46e099f4cc03587c64932de0b8718290ef9a"}
Feb 24 00:17:39 crc kubenswrapper[4824]: I0224 00:17:39.414272 4824 generic.go:334] "Generic (PLEG): container finished" podID="5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d" containerID="496810b67b0efb6898ef84404f173e4e18bf3b537c6c09681e91f7b78d8dfe8a" exitCode=0
Feb 24 00:17:39 crc kubenswrapper[4824]: I0224 00:17:39.414579 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92" event={"ID":"5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d","Type":"ContainerDied","Data":"496810b67b0efb6898ef84404f173e4e18bf3b537c6c09681e91f7b78d8dfe8a"}
Feb 24 00:17:39 crc kubenswrapper[4824]: I0224 00:17:39.417158 4824 generic.go:334] "Generic (PLEG): container finished" podID="7191d6cb-0051-4cd2-a93d-a26af6142eb8" containerID="b8b6c3ce166e52f1d27c7cb3a651a6489b7c1dc51c75081ac1ab7350971c8f9b" exitCode=0
Feb 24 00:17:39 crc kubenswrapper[4824]: I0224 00:17:39.417207 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf" event={"ID":"7191d6cb-0051-4cd2-a93d-a26af6142eb8","Type":"ContainerDied","Data":"b8b6c3ce166e52f1d27c7cb3a651a6489b7c1dc51c75081ac1ab7350971c8f9b"}
Feb 24 00:17:39 crc kubenswrapper[4824]: I0224 00:17:39.417238 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf" event={"ID":"7191d6cb-0051-4cd2-a93d-a26af6142eb8","Type":"ContainerStarted","Data":"36022042664f199397e29d69a9aa82c3b910fb25a3770a289d7d8170493cb3cf"}
Feb 24 00:17:40 crc kubenswrapper[4824]: I0224 00:17:40.425288 4824 generic.go:334] "Generic (PLEG): container finished" podID="5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d" containerID="25b675d6ab9297c81a0362df0ec62867909864db3365cf4211812f0260e4c4fe" exitCode=0
Feb 24 00:17:40 crc kubenswrapper[4824]: I0224 00:17:40.425513 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92" event={"ID":"5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d","Type":"ContainerDied","Data":"25b675d6ab9297c81a0362df0ec62867909864db3365cf4211812f0260e4c4fe"}
Feb 24 00:17:41 crc kubenswrapper[4824]: I0224 00:17:41.669556 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92" Feb 24 00:17:41 crc kubenswrapper[4824]: I0224 00:17:41.822936 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d-bundle\") pod \"5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d\" (UID: \"5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d\") " Feb 24 00:17:41 crc kubenswrapper[4824]: I0224 00:17:41.823071 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d-util\") pod \"5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d\" (UID: \"5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d\") " Feb 24 00:17:41 crc kubenswrapper[4824]: I0224 00:17:41.823303 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9crzx\" (UniqueName: \"kubernetes.io/projected/5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d-kube-api-access-9crzx\") pod \"5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d\" (UID: \"5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d\") " Feb 24 00:17:41 crc kubenswrapper[4824]: I0224 00:17:41.825169 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d-bundle" (OuterVolumeSpecName: "bundle") pod "5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d" (UID: "5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:17:41 crc kubenswrapper[4824]: I0224 00:17:41.832145 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d-kube-api-access-9crzx" (OuterVolumeSpecName: "kube-api-access-9crzx") pod "5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d" (UID: "5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d"). InnerVolumeSpecName "kube-api-access-9crzx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:17:41 crc kubenswrapper[4824]: I0224 00:17:41.852100 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d-util" (OuterVolumeSpecName: "util") pod "5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d" (UID: "5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:17:41 crc kubenswrapper[4824]: I0224 00:17:41.925366 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9crzx\" (UniqueName: \"kubernetes.io/projected/5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d-kube-api-access-9crzx\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:41 crc kubenswrapper[4824]: I0224 00:17:41.925390 4824 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:41 crc kubenswrapper[4824]: I0224 00:17:41.925398 4824 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d-util\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:42 crc kubenswrapper[4824]: I0224 00:17:42.442710 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92" event={"ID":"5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d","Type":"ContainerDied","Data":"35bbfc639b02e1e6a0f72cc00f210f348d0be10c023c511eb8a617921f11e775"} Feb 24 00:17:42 crc kubenswrapper[4824]: I0224 00:17:42.442770 4824 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35bbfc639b02e1e6a0f72cc00f210f348d0be10c023c511eb8a617921f11e775" Feb 24 00:17:42 crc kubenswrapper[4824]: I0224 00:17:42.442911 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92" Feb 24 00:17:44 crc kubenswrapper[4824]: I0224 00:17:44.104666 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq"] Feb 24 00:17:44 crc kubenswrapper[4824]: E0224 00:17:44.106199 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d" containerName="util" Feb 24 00:17:44 crc kubenswrapper[4824]: I0224 00:17:44.106291 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d" containerName="util" Feb 24 00:17:44 crc kubenswrapper[4824]: E0224 00:17:44.106370 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d" containerName="pull" Feb 24 00:17:44 crc kubenswrapper[4824]: I0224 00:17:44.106541 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d" containerName="pull" Feb 24 00:17:44 crc kubenswrapper[4824]: E0224 00:17:44.106601 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d" containerName="extract" Feb 24 00:17:44 crc kubenswrapper[4824]: I0224 00:17:44.106677 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d" containerName="extract" Feb 24 00:17:44 crc kubenswrapper[4824]: I0224 00:17:44.106849 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d" containerName="extract" Feb 24 00:17:44 crc kubenswrapper[4824]: I0224 00:17:44.107985 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq" Feb 24 00:17:44 crc kubenswrapper[4824]: I0224 00:17:44.118683 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq"] Feb 24 00:17:44 crc kubenswrapper[4824]: I0224 00:17:44.155449 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbnd2\" (UniqueName: \"kubernetes.io/projected/379ee973-5632-434f-953c-7f23d7dc8f9d-kube-api-access-kbnd2\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq\" (UID: \"379ee973-5632-434f-953c-7f23d7dc8f9d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq" Feb 24 00:17:44 crc kubenswrapper[4824]: I0224 00:17:44.155545 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/379ee973-5632-434f-953c-7f23d7dc8f9d-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq\" (UID: \"379ee973-5632-434f-953c-7f23d7dc8f9d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq" Feb 24 00:17:44 crc kubenswrapper[4824]: I0224 00:17:44.155587 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/379ee973-5632-434f-953c-7f23d7dc8f9d-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq\" (UID: \"379ee973-5632-434f-953c-7f23d7dc8f9d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq" Feb 24 00:17:44 crc kubenswrapper[4824]: I0224 00:17:44.256825 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/379ee973-5632-434f-953c-7f23d7dc8f9d-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq\" (UID: \"379ee973-5632-434f-953c-7f23d7dc8f9d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq" Feb 24 00:17:44 crc kubenswrapper[4824]: I0224 00:17:44.256908 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/379ee973-5632-434f-953c-7f23d7dc8f9d-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq\" (UID: \"379ee973-5632-434f-953c-7f23d7dc8f9d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq" Feb 24 00:17:44 crc kubenswrapper[4824]: I0224 00:17:44.257092 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbnd2\" (UniqueName: \"kubernetes.io/projected/379ee973-5632-434f-953c-7f23d7dc8f9d-kube-api-access-kbnd2\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq\" (UID: \"379ee973-5632-434f-953c-7f23d7dc8f9d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq" Feb 24 00:17:44 crc kubenswrapper[4824]: I0224 00:17:44.257437 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/379ee973-5632-434f-953c-7f23d7dc8f9d-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq\" (UID: \"379ee973-5632-434f-953c-7f23d7dc8f9d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq" Feb 24 00:17:44 crc kubenswrapper[4824]: I0224 00:17:44.257604 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/379ee973-5632-434f-953c-7f23d7dc8f9d-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq\" (UID: 
\"379ee973-5632-434f-953c-7f23d7dc8f9d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq" Feb 24 00:17:44 crc kubenswrapper[4824]: I0224 00:17:44.275406 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbnd2\" (UniqueName: \"kubernetes.io/projected/379ee973-5632-434f-953c-7f23d7dc8f9d-kube-api-access-kbnd2\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq\" (UID: \"379ee973-5632-434f-953c-7f23d7dc8f9d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq" Feb 24 00:17:44 crc kubenswrapper[4824]: I0224 00:17:44.436436 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq" Feb 24 00:17:44 crc kubenswrapper[4824]: I0224 00:17:44.714914 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq"] Feb 24 00:17:45 crc kubenswrapper[4824]: I0224 00:17:45.465932 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq" event={"ID":"379ee973-5632-434f-953c-7f23d7dc8f9d","Type":"ContainerStarted","Data":"2df8ec581a42c6978d2dab631a2800272eb3bf9842b24962499fc7f88113c8e7"} Feb 24 00:17:46 crc kubenswrapper[4824]: I0224 00:17:46.476409 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw" event={"ID":"55bd419c-9f16-434a-9a7f-0693ab6601d4","Type":"ContainerStarted","Data":"3bef49debf6ae8eaef2587932dd58a555a623eb36885e57bf4fd194158de2166"} Feb 24 00:17:47 crc kubenswrapper[4824]: I0224 00:17:47.487731 4824 generic.go:334] "Generic (PLEG): container finished" podID="55bd419c-9f16-434a-9a7f-0693ab6601d4" 
containerID="3bef49debf6ae8eaef2587932dd58a555a623eb36885e57bf4fd194158de2166" exitCode=0 Feb 24 00:17:47 crc kubenswrapper[4824]: I0224 00:17:47.487794 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw" event={"ID":"55bd419c-9f16-434a-9a7f-0693ab6601d4","Type":"ContainerDied","Data":"3bef49debf6ae8eaef2587932dd58a555a623eb36885e57bf4fd194158de2166"} Feb 24 00:17:47 crc kubenswrapper[4824]: I0224 00:17:47.490499 4824 generic.go:334] "Generic (PLEG): container finished" podID="379ee973-5632-434f-953c-7f23d7dc8f9d" containerID="2b933892d11c381c861cca7ebacf1915b4717e2d12d2e77f789b48f0032df3b0" exitCode=0 Feb 24 00:17:47 crc kubenswrapper[4824]: I0224 00:17:47.490563 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq" event={"ID":"379ee973-5632-434f-953c-7f23d7dc8f9d","Type":"ContainerDied","Data":"2b933892d11c381c861cca7ebacf1915b4717e2d12d2e77f789b48f0032df3b0"} Feb 24 00:17:49 crc kubenswrapper[4824]: I0224 00:17:49.508396 4824 generic.go:334] "Generic (PLEG): container finished" podID="7191d6cb-0051-4cd2-a93d-a26af6142eb8" containerID="4f3e7b9f784cba5e2782ce93e66affbdeefd7741a1463a2e557e7ed9b454f55d" exitCode=0 Feb 24 00:17:49 crc kubenswrapper[4824]: I0224 00:17:49.508582 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf" event={"ID":"7191d6cb-0051-4cd2-a93d-a26af6142eb8","Type":"ContainerDied","Data":"4f3e7b9f784cba5e2782ce93e66affbdeefd7741a1463a2e557e7ed9b454f55d"} Feb 24 00:17:49 crc kubenswrapper[4824]: I0224 00:17:49.522162 4824 generic.go:334] "Generic (PLEG): container finished" podID="55bd419c-9f16-434a-9a7f-0693ab6601d4" containerID="4b6b8dfb291b55977edba3b2c382b669bcec9c4777ed38a9500d565db6603bc4" exitCode=0 Feb 24 00:17:49 crc kubenswrapper[4824]: I0224 00:17:49.522224 
4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw" event={"ID":"55bd419c-9f16-434a-9a7f-0693ab6601d4","Type":"ContainerDied","Data":"4b6b8dfb291b55977edba3b2c382b669bcec9c4777ed38a9500d565db6603bc4"} Feb 24 00:17:50 crc kubenswrapper[4824]: I0224 00:17:50.542437 4824 generic.go:334] "Generic (PLEG): container finished" podID="7191d6cb-0051-4cd2-a93d-a26af6142eb8" containerID="3e6e918d489942ec1d3edee5d48dd806e20a4c9ad46747b937d9623720eddc58" exitCode=0 Feb 24 00:17:50 crc kubenswrapper[4824]: I0224 00:17:50.542557 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf" event={"ID":"7191d6cb-0051-4cd2-a93d-a26af6142eb8","Type":"ContainerDied","Data":"3e6e918d489942ec1d3edee5d48dd806e20a4c9ad46747b937d9623720eddc58"} Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.458214 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw" Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.459106 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf" Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.582892 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7191d6cb-0051-4cd2-a93d-a26af6142eb8-util\") pod \"7191d6cb-0051-4cd2-a93d-a26af6142eb8\" (UID: \"7191d6cb-0051-4cd2-a93d-a26af6142eb8\") " Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.583220 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h5lz\" (UniqueName: \"kubernetes.io/projected/55bd419c-9f16-434a-9a7f-0693ab6601d4-kube-api-access-5h5lz\") pod \"55bd419c-9f16-434a-9a7f-0693ab6601d4\" (UID: \"55bd419c-9f16-434a-9a7f-0693ab6601d4\") " Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.583246 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzjsj\" (UniqueName: \"kubernetes.io/projected/7191d6cb-0051-4cd2-a93d-a26af6142eb8-kube-api-access-hzjsj\") pod \"7191d6cb-0051-4cd2-a93d-a26af6142eb8\" (UID: \"7191d6cb-0051-4cd2-a93d-a26af6142eb8\") " Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.583268 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55bd419c-9f16-434a-9a7f-0693ab6601d4-bundle\") pod \"55bd419c-9f16-434a-9a7f-0693ab6601d4\" (UID: \"55bd419c-9f16-434a-9a7f-0693ab6601d4\") " Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.583342 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7191d6cb-0051-4cd2-a93d-a26af6142eb8-bundle\") pod \"7191d6cb-0051-4cd2-a93d-a26af6142eb8\" (UID: \"7191d6cb-0051-4cd2-a93d-a26af6142eb8\") " Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.583389 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55bd419c-9f16-434a-9a7f-0693ab6601d4-util\") pod \"55bd419c-9f16-434a-9a7f-0693ab6601d4\" (UID: \"55bd419c-9f16-434a-9a7f-0693ab6601d4\") " Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.599687 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7191d6cb-0051-4cd2-a93d-a26af6142eb8-bundle" (OuterVolumeSpecName: "bundle") pod "7191d6cb-0051-4cd2-a93d-a26af6142eb8" (UID: "7191d6cb-0051-4cd2-a93d-a26af6142eb8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.601649 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55bd419c-9f16-434a-9a7f-0693ab6601d4-bundle" (OuterVolumeSpecName: "bundle") pod "55bd419c-9f16-434a-9a7f-0693ab6601d4" (UID: "55bd419c-9f16-434a-9a7f-0693ab6601d4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.612446 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf" event={"ID":"7191d6cb-0051-4cd2-a93d-a26af6142eb8","Type":"ContainerDied","Data":"36022042664f199397e29d69a9aa82c3b910fb25a3770a289d7d8170493cb3cf"} Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.612492 4824 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36022042664f199397e29d69a9aa82c3b910fb25a3770a289d7d8170493cb3cf" Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.612576 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf" Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.615344 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw" event={"ID":"55bd419c-9f16-434a-9a7f-0693ab6601d4","Type":"ContainerDied","Data":"3a70ebd7757696ac2b83b90dea2f46e099f4cc03587c64932de0b8718290ef9a"} Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.615367 4824 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a70ebd7757696ac2b83b90dea2f46e099f4cc03587c64932de0b8718290ef9a" Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.615407 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw" Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.616552 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55bd419c-9f16-434a-9a7f-0693ab6601d4-util" (OuterVolumeSpecName: "util") pod "55bd419c-9f16-434a-9a7f-0693ab6601d4" (UID: "55bd419c-9f16-434a-9a7f-0693ab6601d4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.620762 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7191d6cb-0051-4cd2-a93d-a26af6142eb8-kube-api-access-hzjsj" (OuterVolumeSpecName: "kube-api-access-hzjsj") pod "7191d6cb-0051-4cd2-a93d-a26af6142eb8" (UID: "7191d6cb-0051-4cd2-a93d-a26af6142eb8"). InnerVolumeSpecName "kube-api-access-hzjsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.633896 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7191d6cb-0051-4cd2-a93d-a26af6142eb8-util" (OuterVolumeSpecName: "util") pod "7191d6cb-0051-4cd2-a93d-a26af6142eb8" (UID: "7191d6cb-0051-4cd2-a93d-a26af6142eb8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.639422 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55bd419c-9f16-434a-9a7f-0693ab6601d4-kube-api-access-5h5lz" (OuterVolumeSpecName: "kube-api-access-5h5lz") pod "55bd419c-9f16-434a-9a7f-0693ab6601d4" (UID: "55bd419c-9f16-434a-9a7f-0693ab6601d4"). InnerVolumeSpecName "kube-api-access-5h5lz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.685032 4824 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7191d6cb-0051-4cd2-a93d-a26af6142eb8-util\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.685073 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h5lz\" (UniqueName: \"kubernetes.io/projected/55bd419c-9f16-434a-9a7f-0693ab6601d4-kube-api-access-5h5lz\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.685086 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzjsj\" (UniqueName: \"kubernetes.io/projected/7191d6cb-0051-4cd2-a93d-a26af6142eb8-kube-api-access-hzjsj\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.685095 4824 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55bd419c-9f16-434a-9a7f-0693ab6601d4-bundle\") on node \"crc\" DevicePath \"\"" Feb 
24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.685102 4824 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7191d6cb-0051-4cd2-a93d-a26af6142eb8-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:53 crc kubenswrapper[4824]: I0224 00:17:53.685111 4824 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55bd419c-9f16-434a-9a7f-0693ab6601d4-util\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.576438 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-df47j"] Feb 24 00:17:54 crc kubenswrapper[4824]: E0224 00:17:54.576726 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7191d6cb-0051-4cd2-a93d-a26af6142eb8" containerName="util" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.576744 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="7191d6cb-0051-4cd2-a93d-a26af6142eb8" containerName="util" Feb 24 00:17:54 crc kubenswrapper[4824]: E0224 00:17:54.576762 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55bd419c-9f16-434a-9a7f-0693ab6601d4" containerName="extract" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.576769 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="55bd419c-9f16-434a-9a7f-0693ab6601d4" containerName="extract" Feb 24 00:17:54 crc kubenswrapper[4824]: E0224 00:17:54.576785 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55bd419c-9f16-434a-9a7f-0693ab6601d4" containerName="pull" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.576794 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="55bd419c-9f16-434a-9a7f-0693ab6601d4" containerName="pull" Feb 24 00:17:54 crc kubenswrapper[4824]: E0224 00:17:54.576808 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7191d6cb-0051-4cd2-a93d-a26af6142eb8" 
containerName="extract" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.576815 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="7191d6cb-0051-4cd2-a93d-a26af6142eb8" containerName="extract" Feb 24 00:17:54 crc kubenswrapper[4824]: E0224 00:17:54.576829 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55bd419c-9f16-434a-9a7f-0693ab6601d4" containerName="util" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.576836 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="55bd419c-9f16-434a-9a7f-0693ab6601d4" containerName="util" Feb 24 00:17:54 crc kubenswrapper[4824]: E0224 00:17:54.576843 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7191d6cb-0051-4cd2-a93d-a26af6142eb8" containerName="pull" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.576850 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="7191d6cb-0051-4cd2-a93d-a26af6142eb8" containerName="pull" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.576956 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="55bd419c-9f16-434a-9a7f-0693ab6601d4" containerName="extract" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.576974 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="7191d6cb-0051-4cd2-a93d-a26af6142eb8" containerName="extract" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.577488 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-df47j" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.580440 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-7b6m7" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.581153 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.581549 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.592612 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-df47j"] Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.595808 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8xrt\" (UniqueName: \"kubernetes.io/projected/02a08fee-e933-4730-8755-7419c78d6525-kube-api-access-k8xrt\") pod \"obo-prometheus-operator-68bc856cb9-df47j\" (UID: \"02a08fee-e933-4730-8755-7419c78d6525\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-df47j" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.632391 4824 generic.go:334] "Generic (PLEG): container finished" podID="379ee973-5632-434f-953c-7f23d7dc8f9d" containerID="fbf3348138844ceae4d727ff46350f42697144a3c6384b33631af75832a5090a" exitCode=0 Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.632447 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq" event={"ID":"379ee973-5632-434f-953c-7f23d7dc8f9d","Type":"ContainerDied","Data":"fbf3348138844ceae4d727ff46350f42697144a3c6384b33631af75832a5090a"} Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.696435 4824 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-k8xrt\" (UniqueName: \"kubernetes.io/projected/02a08fee-e933-4730-8755-7419c78d6525-kube-api-access-k8xrt\") pod \"obo-prometheus-operator-68bc856cb9-df47j\" (UID: \"02a08fee-e933-4730-8755-7419c78d6525\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-df47j" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.718452 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8xrt\" (UniqueName: \"kubernetes.io/projected/02a08fee-e933-4730-8755-7419c78d6525-kube-api-access-k8xrt\") pod \"obo-prometheus-operator-68bc856cb9-df47j\" (UID: \"02a08fee-e933-4730-8755-7419c78d6525\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-df47j" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.738472 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf"] Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.739366 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.742131 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-rbpr6" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.743094 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.753613 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s"] Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.755017 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.764619 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf"] Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.799505 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c2534a7-fe82-47e6-b6f9-2bf928bf7c9a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s\" (UID: \"7c2534a7-fe82-47e6-b6f9-2bf928bf7c9a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.799648 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c2534a7-fe82-47e6-b6f9-2bf928bf7c9a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s\" (UID: \"7c2534a7-fe82-47e6-b6f9-2bf928bf7c9a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.799694 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/350461e1-7bfd-4095-9d74-4c3df3159694-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf\" (UID: \"350461e1-7bfd-4095-9d74-4c3df3159694\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.799743 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/350461e1-7bfd-4095-9d74-4c3df3159694-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf\" (UID: \"350461e1-7bfd-4095-9d74-4c3df3159694\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.800849 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s"] Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.892944 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-df47j" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.901690 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c2534a7-fe82-47e6-b6f9-2bf928bf7c9a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s\" (UID: \"7c2534a7-fe82-47e6-b6f9-2bf928bf7c9a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.901770 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/350461e1-7bfd-4095-9d74-4c3df3159694-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf\" (UID: \"350461e1-7bfd-4095-9d74-4c3df3159694\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.901822 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/350461e1-7bfd-4095-9d74-4c3df3159694-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf\" (UID: \"350461e1-7bfd-4095-9d74-4c3df3159694\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.901869 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c2534a7-fe82-47e6-b6f9-2bf928bf7c9a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s\" (UID: \"7c2534a7-fe82-47e6-b6f9-2bf928bf7c9a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.907080 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/350461e1-7bfd-4095-9d74-4c3df3159694-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf\" (UID: \"350461e1-7bfd-4095-9d74-4c3df3159694\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.909765 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7c2534a7-fe82-47e6-b6f9-2bf928bf7c9a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s\" (UID: \"7c2534a7-fe82-47e6-b6f9-2bf928bf7c9a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.911935 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/350461e1-7bfd-4095-9d74-4c3df3159694-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf\" (UID: \"350461e1-7bfd-4095-9d74-4c3df3159694\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.915166 4824 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c2534a7-fe82-47e6-b6f9-2bf928bf7c9a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s\" (UID: \"7c2534a7-fe82-47e6-b6f9-2bf928bf7c9a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.959585 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-hhf7q"] Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.961304 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-hhf7q" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.971910 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-2n6q8" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.972129 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 24 00:17:54 crc kubenswrapper[4824]: I0224 00:17:54.977104 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-hhf7q"] Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.005119 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2mj5\" (UniqueName: \"kubernetes.io/projected/823099c2-9764-455a-a682-57c154c0d895-kube-api-access-c2mj5\") pod \"observability-operator-59bdc8b94-hhf7q\" (UID: \"823099c2-9764-455a-a682-57c154c0d895\") " pod="openshift-operators/observability-operator-59bdc8b94-hhf7q" Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.005225 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/823099c2-9764-455a-a682-57c154c0d895-observability-operator-tls\") pod \"observability-operator-59bdc8b94-hhf7q\" (UID: \"823099c2-9764-455a-a682-57c154c0d895\") " pod="openshift-operators/observability-operator-59bdc8b94-hhf7q" Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.061806 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf" Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.091343 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s" Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.095854 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-frbxc"] Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.096593 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-frbxc" Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.111643 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-n86jq" Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.112270 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2mj5\" (UniqueName: \"kubernetes.io/projected/823099c2-9764-455a-a682-57c154c0d895-kube-api-access-c2mj5\") pod \"observability-operator-59bdc8b94-hhf7q\" (UID: \"823099c2-9764-455a-a682-57c154c0d895\") " pod="openshift-operators/observability-operator-59bdc8b94-hhf7q" Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.112325 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bdh4\" (UniqueName: \"kubernetes.io/projected/885263fe-5a06-4089-b662-d3e4dbc7d08e-kube-api-access-8bdh4\") pod 
\"perses-operator-5bf474d74f-frbxc\" (UID: \"885263fe-5a06-4089-b662-d3e4dbc7d08e\") " pod="openshift-operators/perses-operator-5bf474d74f-frbxc" Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.112361 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/885263fe-5a06-4089-b662-d3e4dbc7d08e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-frbxc\" (UID: \"885263fe-5a06-4089-b662-d3e4dbc7d08e\") " pod="openshift-operators/perses-operator-5bf474d74f-frbxc" Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.112391 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/823099c2-9764-455a-a682-57c154c0d895-observability-operator-tls\") pod \"observability-operator-59bdc8b94-hhf7q\" (UID: \"823099c2-9764-455a-a682-57c154c0d895\") " pod="openshift-operators/observability-operator-59bdc8b94-hhf7q" Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.116963 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/823099c2-9764-455a-a682-57c154c0d895-observability-operator-tls\") pod \"observability-operator-59bdc8b94-hhf7q\" (UID: \"823099c2-9764-455a-a682-57c154c0d895\") " pod="openshift-operators/observability-operator-59bdc8b94-hhf7q" Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.131210 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-frbxc"] Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.140430 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2mj5\" (UniqueName: \"kubernetes.io/projected/823099c2-9764-455a-a682-57c154c0d895-kube-api-access-c2mj5\") pod \"observability-operator-59bdc8b94-hhf7q\" (UID: \"823099c2-9764-455a-a682-57c154c0d895\") " 
pod="openshift-operators/observability-operator-59bdc8b94-hhf7q" Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.213631 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bdh4\" (UniqueName: \"kubernetes.io/projected/885263fe-5a06-4089-b662-d3e4dbc7d08e-kube-api-access-8bdh4\") pod \"perses-operator-5bf474d74f-frbxc\" (UID: \"885263fe-5a06-4089-b662-d3e4dbc7d08e\") " pod="openshift-operators/perses-operator-5bf474d74f-frbxc" Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.213728 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/885263fe-5a06-4089-b662-d3e4dbc7d08e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-frbxc\" (UID: \"885263fe-5a06-4089-b662-d3e4dbc7d08e\") " pod="openshift-operators/perses-operator-5bf474d74f-frbxc" Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.215192 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/885263fe-5a06-4089-b662-d3e4dbc7d08e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-frbxc\" (UID: \"885263fe-5a06-4089-b662-d3e4dbc7d08e\") " pod="openshift-operators/perses-operator-5bf474d74f-frbxc" Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.247305 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bdh4\" (UniqueName: \"kubernetes.io/projected/885263fe-5a06-4089-b662-d3e4dbc7d08e-kube-api-access-8bdh4\") pod \"perses-operator-5bf474d74f-frbxc\" (UID: \"885263fe-5a06-4089-b662-d3e4dbc7d08e\") " pod="openshift-operators/perses-operator-5bf474d74f-frbxc" Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.278337 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-hhf7q" Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.389561 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-df47j"] Feb 24 00:17:55 crc kubenswrapper[4824]: W0224 00:17:55.423082 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02a08fee_e933_4730_8755_7419c78d6525.slice/crio-24f184ffaaad436a493cca77ae22e0bba5d6a647fd6cfeab4abc2a00a2e13d75 WatchSource:0}: Error finding container 24f184ffaaad436a493cca77ae22e0bba5d6a647fd6cfeab4abc2a00a2e13d75: Status 404 returned error can't find the container with id 24f184ffaaad436a493cca77ae22e0bba5d6a647fd6cfeab4abc2a00a2e13d75 Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.450930 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-frbxc" Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.493544 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s"] Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.566194 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-hhf7q"] Feb 24 00:17:55 crc kubenswrapper[4824]: W0224 00:17:55.587575 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod823099c2_9764_455a_a682_57c154c0d895.slice/crio-07e543b1d4d85b43dfc669fd140a601c0289bc8779bdc1d5bb9502005b69d6b6 WatchSource:0}: Error finding container 07e543b1d4d85b43dfc669fd140a601c0289bc8779bdc1d5bb9502005b69d6b6: Status 404 returned error can't find the container with id 07e543b1d4d85b43dfc669fd140a601c0289bc8779bdc1d5bb9502005b69d6b6 Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 
00:17:55.629734 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf"] Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.639294 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-df47j" event={"ID":"02a08fee-e933-4730-8755-7419c78d6525","Type":"ContainerStarted","Data":"24f184ffaaad436a493cca77ae22e0bba5d6a647fd6cfeab4abc2a00a2e13d75"} Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.640162 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-hhf7q" event={"ID":"823099c2-9764-455a-a682-57c154c0d895","Type":"ContainerStarted","Data":"07e543b1d4d85b43dfc669fd140a601c0289bc8779bdc1d5bb9502005b69d6b6"} Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.646618 4824 generic.go:334] "Generic (PLEG): container finished" podID="379ee973-5632-434f-953c-7f23d7dc8f9d" containerID="62713628ab0cf1b8ac665aa19b02a03c9b8eeac677ad129a17555f65c436b0bc" exitCode=0 Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.646712 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq" event={"ID":"379ee973-5632-434f-953c-7f23d7dc8f9d","Type":"ContainerDied","Data":"62713628ab0cf1b8ac665aa19b02a03c9b8eeac677ad129a17555f65c436b0bc"} Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.652402 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s" event={"ID":"7c2534a7-fe82-47e6-b6f9-2bf928bf7c9a","Type":"ContainerStarted","Data":"fe35a49ff8f75dbdc7344b5aa6d6833d5b42762e690827a1554252b0797cfd87"} Feb 24 00:17:55 crc kubenswrapper[4824]: I0224 00:17:55.915009 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-frbxc"] Feb 24 00:17:55 crc 
kubenswrapper[4824]: W0224 00:17:55.922525 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod885263fe_5a06_4089_b662_d3e4dbc7d08e.slice/crio-7bcb24956391b4d5f6f8854317d49fadc6064ce27b580db74115c7ba7f08e478 WatchSource:0}: Error finding container 7bcb24956391b4d5f6f8854317d49fadc6064ce27b580db74115c7ba7f08e478: Status 404 returned error can't find the container with id 7bcb24956391b4d5f6f8854317d49fadc6064ce27b580db74115c7ba7f08e478 Feb 24 00:17:56 crc kubenswrapper[4824]: I0224 00:17:56.664995 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf" event={"ID":"350461e1-7bfd-4095-9d74-4c3df3159694","Type":"ContainerStarted","Data":"7ec40daae64b22521d2710cb1c76929ab29f88eb8684bc42046b3f06a20a2438"} Feb 24 00:17:56 crc kubenswrapper[4824]: I0224 00:17:56.667964 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-frbxc" event={"ID":"885263fe-5a06-4089-b662-d3e4dbc7d08e","Type":"ContainerStarted","Data":"7bcb24956391b4d5f6f8854317d49fadc6064ce27b580db74115c7ba7f08e478"} Feb 24 00:17:57 crc kubenswrapper[4824]: I0224 00:17:57.079452 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq" Feb 24 00:17:57 crc kubenswrapper[4824]: I0224 00:17:57.150755 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbnd2\" (UniqueName: \"kubernetes.io/projected/379ee973-5632-434f-953c-7f23d7dc8f9d-kube-api-access-kbnd2\") pod \"379ee973-5632-434f-953c-7f23d7dc8f9d\" (UID: \"379ee973-5632-434f-953c-7f23d7dc8f9d\") " Feb 24 00:17:57 crc kubenswrapper[4824]: I0224 00:17:57.150844 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/379ee973-5632-434f-953c-7f23d7dc8f9d-util\") pod \"379ee973-5632-434f-953c-7f23d7dc8f9d\" (UID: \"379ee973-5632-434f-953c-7f23d7dc8f9d\") " Feb 24 00:17:57 crc kubenswrapper[4824]: I0224 00:17:57.150987 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/379ee973-5632-434f-953c-7f23d7dc8f9d-bundle\") pod \"379ee973-5632-434f-953c-7f23d7dc8f9d\" (UID: \"379ee973-5632-434f-953c-7f23d7dc8f9d\") " Feb 24 00:17:57 crc kubenswrapper[4824]: I0224 00:17:57.152721 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/379ee973-5632-434f-953c-7f23d7dc8f9d-bundle" (OuterVolumeSpecName: "bundle") pod "379ee973-5632-434f-953c-7f23d7dc8f9d" (UID: "379ee973-5632-434f-953c-7f23d7dc8f9d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:17:57 crc kubenswrapper[4824]: I0224 00:17:57.157613 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/379ee973-5632-434f-953c-7f23d7dc8f9d-kube-api-access-kbnd2" (OuterVolumeSpecName: "kube-api-access-kbnd2") pod "379ee973-5632-434f-953c-7f23d7dc8f9d" (UID: "379ee973-5632-434f-953c-7f23d7dc8f9d"). InnerVolumeSpecName "kube-api-access-kbnd2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:17:57 crc kubenswrapper[4824]: I0224 00:17:57.163773 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/379ee973-5632-434f-953c-7f23d7dc8f9d-util" (OuterVolumeSpecName: "util") pod "379ee973-5632-434f-953c-7f23d7dc8f9d" (UID: "379ee973-5632-434f-953c-7f23d7dc8f9d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:17:57 crc kubenswrapper[4824]: I0224 00:17:57.253134 4824 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/379ee973-5632-434f-953c-7f23d7dc8f9d-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:57 crc kubenswrapper[4824]: I0224 00:17:57.253187 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbnd2\" (UniqueName: \"kubernetes.io/projected/379ee973-5632-434f-953c-7f23d7dc8f9d-kube-api-access-kbnd2\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:57 crc kubenswrapper[4824]: I0224 00:17:57.253199 4824 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/379ee973-5632-434f-953c-7f23d7dc8f9d-util\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:57 crc kubenswrapper[4824]: I0224 00:17:57.680308 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq" event={"ID":"379ee973-5632-434f-953c-7f23d7dc8f9d","Type":"ContainerDied","Data":"2df8ec581a42c6978d2dab631a2800272eb3bf9842b24962499fc7f88113c8e7"} Feb 24 00:17:57 crc kubenswrapper[4824]: I0224 00:17:57.680359 4824 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2df8ec581a42c6978d2dab631a2800272eb3bf9842b24962499fc7f88113c8e7" Feb 24 00:17:57 crc kubenswrapper[4824]: I0224 00:17:57.680432 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq" Feb 24 00:18:02 crc kubenswrapper[4824]: I0224 00:18:02.445307 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-7d9xq"] Feb 24 00:18:02 crc kubenswrapper[4824]: E0224 00:18:02.446118 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="379ee973-5632-434f-953c-7f23d7dc8f9d" containerName="pull" Feb 24 00:18:02 crc kubenswrapper[4824]: I0224 00:18:02.446136 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="379ee973-5632-434f-953c-7f23d7dc8f9d" containerName="pull" Feb 24 00:18:02 crc kubenswrapper[4824]: E0224 00:18:02.446149 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="379ee973-5632-434f-953c-7f23d7dc8f9d" containerName="util" Feb 24 00:18:02 crc kubenswrapper[4824]: I0224 00:18:02.446155 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="379ee973-5632-434f-953c-7f23d7dc8f9d" containerName="util" Feb 24 00:18:02 crc kubenswrapper[4824]: E0224 00:18:02.446170 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="379ee973-5632-434f-953c-7f23d7dc8f9d" containerName="extract" Feb 24 00:18:02 crc kubenswrapper[4824]: I0224 00:18:02.446177 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="379ee973-5632-434f-953c-7f23d7dc8f9d" containerName="extract" Feb 24 00:18:02 crc kubenswrapper[4824]: I0224 00:18:02.446305 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="379ee973-5632-434f-953c-7f23d7dc8f9d" containerName="extract" Feb 24 00:18:02 crc kubenswrapper[4824]: I0224 00:18:02.446718 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-7d9xq" Feb 24 00:18:02 crc kubenswrapper[4824]: I0224 00:18:02.450595 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-54htf" Feb 24 00:18:02 crc kubenswrapper[4824]: I0224 00:18:02.450755 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Feb 24 00:18:02 crc kubenswrapper[4824]: I0224 00:18:02.451229 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Feb 24 00:18:02 crc kubenswrapper[4824]: I0224 00:18:02.459624 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-7d9xq"] Feb 24 00:18:02 crc kubenswrapper[4824]: I0224 00:18:02.569940 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92shp\" (UniqueName: \"kubernetes.io/projected/125693c0-b095-4b7e-9ce3-b96785d4198e-kube-api-access-92shp\") pod \"interconnect-operator-5bb49f789d-7d9xq\" (UID: \"125693c0-b095-4b7e-9ce3-b96785d4198e\") " pod="service-telemetry/interconnect-operator-5bb49f789d-7d9xq" Feb 24 00:18:02 crc kubenswrapper[4824]: I0224 00:18:02.671108 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92shp\" (UniqueName: \"kubernetes.io/projected/125693c0-b095-4b7e-9ce3-b96785d4198e-kube-api-access-92shp\") pod \"interconnect-operator-5bb49f789d-7d9xq\" (UID: \"125693c0-b095-4b7e-9ce3-b96785d4198e\") " pod="service-telemetry/interconnect-operator-5bb49f789d-7d9xq" Feb 24 00:18:02 crc kubenswrapper[4824]: I0224 00:18:02.700632 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92shp\" (UniqueName: \"kubernetes.io/projected/125693c0-b095-4b7e-9ce3-b96785d4198e-kube-api-access-92shp\") pod 
\"interconnect-operator-5bb49f789d-7d9xq\" (UID: \"125693c0-b095-4b7e-9ce3-b96785d4198e\") " pod="service-telemetry/interconnect-operator-5bb49f789d-7d9xq" Feb 24 00:18:02 crc kubenswrapper[4824]: I0224 00:18:02.768493 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-7d9xq" Feb 24 00:18:05 crc kubenswrapper[4824]: I0224 00:18:05.693948 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-788857d49f-cs5c7"] Feb 24 00:18:05 crc kubenswrapper[4824]: I0224 00:18:05.695550 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-788857d49f-cs5c7" Feb 24 00:18:05 crc kubenswrapper[4824]: I0224 00:18:05.698751 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-59f26" Feb 24 00:18:05 crc kubenswrapper[4824]: I0224 00:18:05.699066 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Feb 24 00:18:05 crc kubenswrapper[4824]: I0224 00:18:05.718754 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-788857d49f-cs5c7"] Feb 24 00:18:05 crc kubenswrapper[4824]: I0224 00:18:05.821003 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/43133e56-401c-453a-a59c-723bd8301fce-webhook-cert\") pod \"elastic-operator-788857d49f-cs5c7\" (UID: \"43133e56-401c-453a-a59c-723bd8301fce\") " pod="service-telemetry/elastic-operator-788857d49f-cs5c7" Feb 24 00:18:05 crc kubenswrapper[4824]: I0224 00:18:05.821053 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-629r9\" (UniqueName: \"kubernetes.io/projected/43133e56-401c-453a-a59c-723bd8301fce-kube-api-access-629r9\") pod 
\"elastic-operator-788857d49f-cs5c7\" (UID: \"43133e56-401c-453a-a59c-723bd8301fce\") " pod="service-telemetry/elastic-operator-788857d49f-cs5c7" Feb 24 00:18:05 crc kubenswrapper[4824]: I0224 00:18:05.821093 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/43133e56-401c-453a-a59c-723bd8301fce-apiservice-cert\") pod \"elastic-operator-788857d49f-cs5c7\" (UID: \"43133e56-401c-453a-a59c-723bd8301fce\") " pod="service-telemetry/elastic-operator-788857d49f-cs5c7" Feb 24 00:18:05 crc kubenswrapper[4824]: I0224 00:18:05.895378 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-7d9xq"] Feb 24 00:18:05 crc kubenswrapper[4824]: I0224 00:18:05.922424 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/43133e56-401c-453a-a59c-723bd8301fce-apiservice-cert\") pod \"elastic-operator-788857d49f-cs5c7\" (UID: \"43133e56-401c-453a-a59c-723bd8301fce\") " pod="service-telemetry/elastic-operator-788857d49f-cs5c7" Feb 24 00:18:05 crc kubenswrapper[4824]: I0224 00:18:05.922567 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/43133e56-401c-453a-a59c-723bd8301fce-webhook-cert\") pod \"elastic-operator-788857d49f-cs5c7\" (UID: \"43133e56-401c-453a-a59c-723bd8301fce\") " pod="service-telemetry/elastic-operator-788857d49f-cs5c7" Feb 24 00:18:05 crc kubenswrapper[4824]: I0224 00:18:05.922625 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-629r9\" (UniqueName: \"kubernetes.io/projected/43133e56-401c-453a-a59c-723bd8301fce-kube-api-access-629r9\") pod \"elastic-operator-788857d49f-cs5c7\" (UID: \"43133e56-401c-453a-a59c-723bd8301fce\") " pod="service-telemetry/elastic-operator-788857d49f-cs5c7" Feb 24 00:18:05 crc 
kubenswrapper[4824]: I0224 00:18:05.938222 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/43133e56-401c-453a-a59c-723bd8301fce-webhook-cert\") pod \"elastic-operator-788857d49f-cs5c7\" (UID: \"43133e56-401c-453a-a59c-723bd8301fce\") " pod="service-telemetry/elastic-operator-788857d49f-cs5c7" Feb 24 00:18:05 crc kubenswrapper[4824]: I0224 00:18:05.945390 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/43133e56-401c-453a-a59c-723bd8301fce-apiservice-cert\") pod \"elastic-operator-788857d49f-cs5c7\" (UID: \"43133e56-401c-453a-a59c-723bd8301fce\") " pod="service-telemetry/elastic-operator-788857d49f-cs5c7" Feb 24 00:18:05 crc kubenswrapper[4824]: I0224 00:18:05.957692 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-629r9\" (UniqueName: \"kubernetes.io/projected/43133e56-401c-453a-a59c-723bd8301fce-kube-api-access-629r9\") pod \"elastic-operator-788857d49f-cs5c7\" (UID: \"43133e56-401c-453a-a59c-723bd8301fce\") " pod="service-telemetry/elastic-operator-788857d49f-cs5c7" Feb 24 00:18:06 crc kubenswrapper[4824]: I0224 00:18:06.061777 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-788857d49f-cs5c7" Feb 24 00:18:06 crc kubenswrapper[4824]: I0224 00:18:06.510167 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-788857d49f-cs5c7"] Feb 24 00:18:06 crc kubenswrapper[4824]: I0224 00:18:06.758682 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-7d9xq" event={"ID":"125693c0-b095-4b7e-9ce3-b96785d4198e","Type":"ContainerStarted","Data":"b01bec5372b9cc3ce96768d6fef57c3c7da367ac60e8d83d635408a6c88e89f6"} Feb 24 00:18:06 crc kubenswrapper[4824]: I0224 00:18:06.760242 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-df47j" event={"ID":"02a08fee-e933-4730-8755-7419c78d6525","Type":"ContainerStarted","Data":"fcf1371bb8ce2cc80205fc8f654b7e6018324f4ee7dc104874aba26176cc50ee"} Feb 24 00:18:06 crc kubenswrapper[4824]: I0224 00:18:06.761445 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf" event={"ID":"350461e1-7bfd-4095-9d74-4c3df3159694","Type":"ContainerStarted","Data":"469cf9061ae35cdb46a5045e32bcd68f5d4c19f070a6379c8b6c993146006b4e"} Feb 24 00:18:06 crc kubenswrapper[4824]: I0224 00:18:06.762484 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-788857d49f-cs5c7" event={"ID":"43133e56-401c-453a-a59c-723bd8301fce","Type":"ContainerStarted","Data":"9d1a26e622fac907f6f6b9e2185013009b4eada315908df1b22e0905ac4fe7e4"} Feb 24 00:18:06 crc kubenswrapper[4824]: I0224 00:18:06.763763 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-hhf7q" event={"ID":"823099c2-9764-455a-a682-57c154c0d895","Type":"ContainerStarted","Data":"0e9fe346b6722a8934542c82f6e4c0d4afae3e66748496f119aeb2f964d68642"} Feb 24 00:18:06 crc kubenswrapper[4824]: I0224 
00:18:06.764426 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-hhf7q" Feb 24 00:18:06 crc kubenswrapper[4824]: I0224 00:18:06.766194 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s" event={"ID":"7c2534a7-fe82-47e6-b6f9-2bf928bf7c9a","Type":"ContainerStarted","Data":"fe0f410b8e4180707913c032dbf885915f74c199aaa682de4fbbcb68912bec25"} Feb 24 00:18:06 crc kubenswrapper[4824]: I0224 00:18:06.768998 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-frbxc" event={"ID":"885263fe-5a06-4089-b662-d3e4dbc7d08e","Type":"ContainerStarted","Data":"ce2508a2ea2ff5db86300d0cac7cebdec925662895fa099c1a5d4c296344b739"} Feb 24 00:18:06 crc kubenswrapper[4824]: I0224 00:18:06.769387 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-frbxc" Feb 24 00:18:06 crc kubenswrapper[4824]: I0224 00:18:06.780888 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-df47j" podStartSLOduration=2.672208942 podStartE2EDuration="12.780864244s" podCreationTimestamp="2026-02-24 00:17:54 +0000 UTC" firstStartedPulling="2026-02-24 00:17:55.427072256 +0000 UTC m=+739.416696725" lastFinishedPulling="2026-02-24 00:18:05.535727558 +0000 UTC m=+749.525352027" observedRunningTime="2026-02-24 00:18:06.775683421 +0000 UTC m=+750.765307890" watchObservedRunningTime="2026-02-24 00:18:06.780864244 +0000 UTC m=+750.770488723" Feb 24 00:18:06 crc kubenswrapper[4824]: I0224 00:18:06.788893 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-hhf7q" Feb 24 00:18:06 crc kubenswrapper[4824]: I0224 00:18:06.802046 4824 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-operators/perses-operator-5bf474d74f-frbxc" podStartSLOduration=2.229990546 podStartE2EDuration="11.802029557s" podCreationTimestamp="2026-02-24 00:17:55 +0000 UTC" firstStartedPulling="2026-02-24 00:17:55.925495886 +0000 UTC m=+739.915120345" lastFinishedPulling="2026-02-24 00:18:05.497534887 +0000 UTC m=+749.487159356" observedRunningTime="2026-02-24 00:18:06.799822781 +0000 UTC m=+750.789447250" watchObservedRunningTime="2026-02-24 00:18:06.802029557 +0000 UTC m=+750.791654026" Feb 24 00:18:06 crc kubenswrapper[4824]: I0224 00:18:06.831745 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-hhf7q" podStartSLOduration=2.88860733 podStartE2EDuration="12.83172331s" podCreationTimestamp="2026-02-24 00:17:54 +0000 UTC" firstStartedPulling="2026-02-24 00:17:55.59465367 +0000 UTC m=+739.584278139" lastFinishedPulling="2026-02-24 00:18:05.53776965 +0000 UTC m=+749.527394119" observedRunningTime="2026-02-24 00:18:06.829743359 +0000 UTC m=+750.819367838" watchObservedRunningTime="2026-02-24 00:18:06.83172331 +0000 UTC m=+750.821347779" Feb 24 00:18:06 crc kubenswrapper[4824]: I0224 00:18:06.858796 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s" podStartSLOduration=2.905622326 podStartE2EDuration="12.858773314s" podCreationTimestamp="2026-02-24 00:17:54 +0000 UTC" firstStartedPulling="2026-02-24 00:17:55.51171837 +0000 UTC m=+739.501342839" lastFinishedPulling="2026-02-24 00:18:05.464869358 +0000 UTC m=+749.454493827" observedRunningTime="2026-02-24 00:18:06.854273299 +0000 UTC m=+750.843897768" watchObservedRunningTime="2026-02-24 00:18:06.858773314 +0000 UTC m=+750.848397783" Feb 24 00:18:06 crc kubenswrapper[4824]: I0224 00:18:06.880186 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf" podStartSLOduration=3.050625551 podStartE2EDuration="12.880166594s" podCreationTimestamp="2026-02-24 00:17:54 +0000 UTC" firstStartedPulling="2026-02-24 00:17:55.646156223 +0000 UTC m=+739.635780692" lastFinishedPulling="2026-02-24 00:18:05.475697266 +0000 UTC m=+749.465321735" observedRunningTime="2026-02-24 00:18:06.878619204 +0000 UTC m=+750.868243703" watchObservedRunningTime="2026-02-24 00:18:06.880166594 +0000 UTC m=+750.869791063" Feb 24 00:18:09 crc kubenswrapper[4824]: I0224 00:18:09.790345 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-788857d49f-cs5c7" event={"ID":"43133e56-401c-453a-a59c-723bd8301fce","Type":"ContainerStarted","Data":"bbdb91bd5a692aeab89a376c7905813cf7aebf46e06652d8c2e2094e4736d44c"} Feb 24 00:18:09 crc kubenswrapper[4824]: I0224 00:18:09.811711 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-788857d49f-cs5c7" podStartSLOduration=2.169936056 podStartE2EDuration="4.811687206s" podCreationTimestamp="2026-02-24 00:18:05 +0000 UTC" firstStartedPulling="2026-02-24 00:18:06.537872024 +0000 UTC m=+750.527496493" lastFinishedPulling="2026-02-24 00:18:09.179623174 +0000 UTC m=+753.169247643" observedRunningTime="2026-02-24 00:18:09.808088823 +0000 UTC m=+753.797713302" watchObservedRunningTime="2026-02-24 00:18:09.811687206 +0000 UTC m=+753.801311675" Feb 24 00:18:13 crc kubenswrapper[4824]: I0224 00:18:13.216467 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-p78vd"] Feb 24 00:18:13 crc kubenswrapper[4824]: I0224 00:18:13.217773 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-p78vd" Feb 24 00:18:13 crc kubenswrapper[4824]: I0224 00:18:13.219654 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 24 00:18:13 crc kubenswrapper[4824]: I0224 00:18:13.219985 4824 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-jkwbj" Feb 24 00:18:13 crc kubenswrapper[4824]: I0224 00:18:13.220112 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 24 00:18:13 crc kubenswrapper[4824]: I0224 00:18:13.239439 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-p78vd"] Feb 24 00:18:13 crc kubenswrapper[4824]: I0224 00:18:13.323895 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af381dba-8d03-4a1d-94a5-cd8a45dbc318-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-p78vd\" (UID: \"af381dba-8d03-4a1d-94a5-cd8a45dbc318\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-p78vd" Feb 24 00:18:13 crc kubenswrapper[4824]: I0224 00:18:13.323966 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b2pt\" (UniqueName: \"kubernetes.io/projected/af381dba-8d03-4a1d-94a5-cd8a45dbc318-kube-api-access-7b2pt\") pod \"cert-manager-operator-controller-manager-5586865c96-p78vd\" (UID: \"af381dba-8d03-4a1d-94a5-cd8a45dbc318\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-p78vd" Feb 24 00:18:13 crc kubenswrapper[4824]: I0224 00:18:13.425758 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7b2pt\" (UniqueName: \"kubernetes.io/projected/af381dba-8d03-4a1d-94a5-cd8a45dbc318-kube-api-access-7b2pt\") pod \"cert-manager-operator-controller-manager-5586865c96-p78vd\" (UID: \"af381dba-8d03-4a1d-94a5-cd8a45dbc318\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-p78vd" Feb 24 00:18:13 crc kubenswrapper[4824]: I0224 00:18:13.425925 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af381dba-8d03-4a1d-94a5-cd8a45dbc318-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-p78vd\" (UID: \"af381dba-8d03-4a1d-94a5-cd8a45dbc318\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-p78vd" Feb 24 00:18:13 crc kubenswrapper[4824]: I0224 00:18:13.426684 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af381dba-8d03-4a1d-94a5-cd8a45dbc318-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-p78vd\" (UID: \"af381dba-8d03-4a1d-94a5-cd8a45dbc318\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-p78vd" Feb 24 00:18:13 crc kubenswrapper[4824]: I0224 00:18:13.462824 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b2pt\" (UniqueName: \"kubernetes.io/projected/af381dba-8d03-4a1d-94a5-cd8a45dbc318-kube-api-access-7b2pt\") pod \"cert-manager-operator-controller-manager-5586865c96-p78vd\" (UID: \"af381dba-8d03-4a1d-94a5-cd8a45dbc318\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-p78vd" Feb 24 00:18:13 crc kubenswrapper[4824]: I0224 00:18:13.533817 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-p78vd" Feb 24 00:18:13 crc kubenswrapper[4824]: I0224 00:18:13.835004 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-p78vd"] Feb 24 00:18:13 crc kubenswrapper[4824]: W0224 00:18:13.852241 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf381dba_8d03_4a1d_94a5_cd8a45dbc318.slice/crio-36d10d05706634f48ff722062f8ec4e4c41eb1fcfe9f039c0e2e620d7ad648b0 WatchSource:0}: Error finding container 36d10d05706634f48ff722062f8ec4e4c41eb1fcfe9f039c0e2e620d7ad648b0: Status 404 returned error can't find the container with id 36d10d05706634f48ff722062f8ec4e4c41eb1fcfe9f039c0e2e620d7ad648b0 Feb 24 00:18:14 crc kubenswrapper[4824]: I0224 00:18:14.827384 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-p78vd" event={"ID":"af381dba-8d03-4a1d-94a5-cd8a45dbc318","Type":"ContainerStarted","Data":"36d10d05706634f48ff722062f8ec4e4c41eb1fcfe9f039c0e2e620d7ad648b0"} Feb 24 00:18:15 crc kubenswrapper[4824]: I0224 00:18:15.454114 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-frbxc" Feb 24 00:18:17 crc kubenswrapper[4824]: I0224 00:18:17.849096 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-p78vd" event={"ID":"af381dba-8d03-4a1d-94a5-cd8a45dbc318","Type":"ContainerStarted","Data":"06419dbb9409c63c96175fc68d0b4512ecd169b3b09187a7ac9cb013c2a4d0d8"} Feb 24 00:18:17 crc kubenswrapper[4824]: I0224 00:18:17.871826 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-p78vd" 
podStartSLOduration=1.755610607 podStartE2EDuration="4.871808802s" podCreationTimestamp="2026-02-24 00:18:13 +0000 UTC" firstStartedPulling="2026-02-24 00:18:13.856109677 +0000 UTC m=+757.845734146" lastFinishedPulling="2026-02-24 00:18:16.972307872 +0000 UTC m=+760.961932341" observedRunningTime="2026-02-24 00:18:17.867758738 +0000 UTC m=+761.857383217" watchObservedRunningTime="2026-02-24 00:18:17.871808802 +0000 UTC m=+761.861433271" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.885972 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.887415 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.893333 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.893372 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Feb 24 00:18:20 crc kubenswrapper[4824]: W0224 00:18:20.893590 4824 reflector.go:561] object-"service-telemetry"/"default-dockercfg-ddnnb": failed to list *v1.Secret: secrets "default-dockercfg-ddnnb" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "service-telemetry": no relationship found between node 'crc' and this object Feb 24 00:18:20 crc kubenswrapper[4824]: E0224 00:18:20.893636 4824 reflector.go:158] "Unhandled Error" err="object-\"service-telemetry\"/\"default-dockercfg-ddnnb\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-ddnnb\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"service-telemetry\": no relationship found between node 'crc' and this object" 
logger="UnhandledError" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.894835 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.895439 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.895849 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Feb 24 00:18:20 crc kubenswrapper[4824]: W0224 00:18:20.895881 4824 reflector.go:561] object-"service-telemetry"/"elasticsearch-es-scripts": failed to list *v1.ConfigMap: configmaps "elasticsearch-es-scripts" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "service-telemetry": no relationship found between node 'crc' and this object Feb 24 00:18:20 crc kubenswrapper[4824]: E0224 00:18:20.895932 4824 reflector.go:158] "Unhandled Error" err="object-\"service-telemetry\"/\"elasticsearch-es-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"elasticsearch-es-scripts\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"service-telemetry\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.896512 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.901153 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.915015 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/elasticsearch-es-default-0"] Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.944854 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/96f9c835-f7c9-4774-9b95-8911ab4ffb23-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.944924 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.944954 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.944979 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.945010 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.945030 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/96f9c835-f7c9-4774-9b95-8911ab4ffb23-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.945052 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.945077 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.945103 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:20 
crc kubenswrapper[4824]: I0224 00:18:20.945137 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.945160 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.945187 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.945214 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.945242 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:20 crc kubenswrapper[4824]: I0224 00:18:20.945268 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.046577 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/96f9c835-f7c9-4774-9b95-8911ab4ffb23-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.046657 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.046686 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " 
pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.046708 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.046732 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.046753 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/96f9c835-f7c9-4774-9b95-8911ab4ffb23-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.046772 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.046798 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: 
\"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.046821 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.046850 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.046871 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.046894 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.046922 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.046949 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.046973 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.047825 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.048530 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/96f9c835-f7c9-4774-9b95-8911ab4ffb23-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 
24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.050128 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.050218 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.050656 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.051357 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.051671 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: 
\"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.055703 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/96f9c835-f7c9-4774-9b95-8911ab4ffb23-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.055765 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.056043 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.056161 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.056281 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: 
\"kubernetes.io/secret/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.056755 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.058343 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.698617 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-m8rqb"] Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.699646 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-m8rqb" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.705834 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.706012 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.706145 4824 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-vsvqw" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.718947 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-m8rqb"] Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.781679 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad293038-bf1d-4800-bd32-9488c5f19e95-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-m8rqb\" (UID: \"ad293038-bf1d-4800-bd32-9488c5f19e95\") " pod="cert-manager/cert-manager-webhook-6888856db4-m8rqb" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.782197 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwd8b\" (UniqueName: \"kubernetes.io/projected/ad293038-bf1d-4800-bd32-9488c5f19e95-kube-api-access-jwd8b\") pod \"cert-manager-webhook-6888856db4-m8rqb\" (UID: \"ad293038-bf1d-4800-bd32-9488c5f19e95\") " pod="cert-manager/cert-manager-webhook-6888856db4-m8rqb" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.883331 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad293038-bf1d-4800-bd32-9488c5f19e95-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-m8rqb\" (UID: 
\"ad293038-bf1d-4800-bd32-9488c5f19e95\") " pod="cert-manager/cert-manager-webhook-6888856db4-m8rqb" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.883436 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwd8b\" (UniqueName: \"kubernetes.io/projected/ad293038-bf1d-4800-bd32-9488c5f19e95-kube-api-access-jwd8b\") pod \"cert-manager-webhook-6888856db4-m8rqb\" (UID: \"ad293038-bf1d-4800-bd32-9488c5f19e95\") " pod="cert-manager/cert-manager-webhook-6888856db4-m8rqb" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.919565 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-ddnnb" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.922143 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ad293038-bf1d-4800-bd32-9488c5f19e95-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-m8rqb\" (UID: \"ad293038-bf1d-4800-bd32-9488c5f19e95\") " pod="cert-manager/cert-manager-webhook-6888856db4-m8rqb" Feb 24 00:18:21 crc kubenswrapper[4824]: I0224 00:18:21.922903 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwd8b\" (UniqueName: \"kubernetes.io/projected/ad293038-bf1d-4800-bd32-9488c5f19e95-kube-api-access-jwd8b\") pod \"cert-manager-webhook-6888856db4-m8rqb\" (UID: \"ad293038-bf1d-4800-bd32-9488c5f19e95\") " pod="cert-manager/cert-manager-webhook-6888856db4-m8rqb" Feb 24 00:18:22 crc kubenswrapper[4824]: I0224 00:18:22.013754 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-m8rqb" Feb 24 00:18:22 crc kubenswrapper[4824]: E0224 00:18:22.050365 4824 configmap.go:193] Couldn't get configMap service-telemetry/elasticsearch-es-scripts: failed to sync configmap cache: timed out waiting for the condition Feb 24 00:18:22 crc kubenswrapper[4824]: E0224 00:18:22.050459 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-scripts podName:96f9c835-f7c9-4774-9b95-8911ab4ffb23 nodeName:}" failed. No retries permitted until 2026-02-24 00:18:22.550441289 +0000 UTC m=+766.540065758 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "elastic-internal-scripts" (UniqueName: "kubernetes.io/configmap/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-scripts") pod "elasticsearch-es-default-0" (UID: "96f9c835-f7c9-4774-9b95-8911ab4ffb23") : failed to sync configmap cache: timed out waiting for the condition Feb 24 00:18:22 crc kubenswrapper[4824]: I0224 00:18:22.448790 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Feb 24 00:18:22 crc kubenswrapper[4824]: I0224 00:18:22.528752 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-m8rqb"] Feb 24 00:18:22 crc kubenswrapper[4824]: W0224 00:18:22.565113 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad293038_bf1d_4800_bd32_9488c5f19e95.slice/crio-bdb463b23ac5ac9e9024eb9aef4baee7f3b4e4f17c968fa5d992aeb24602eeb8 WatchSource:0}: Error finding container bdb463b23ac5ac9e9024eb9aef4baee7f3b4e4f17c968fa5d992aeb24602eeb8: Status 404 returned error can't find the container with id bdb463b23ac5ac9e9024eb9aef4baee7f3b4e4f17c968fa5d992aeb24602eeb8 Feb 24 00:18:22 crc kubenswrapper[4824]: I0224 00:18:22.593915 4824 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:22 crc kubenswrapper[4824]: I0224 00:18:22.595581 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/96f9c835-f7c9-4774-9b95-8911ab4ffb23-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"96f9c835-f7c9-4774-9b95-8911ab4ffb23\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:22 crc kubenswrapper[4824]: I0224 00:18:22.706381 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:18:22 crc kubenswrapper[4824]: I0224 00:18:22.886495 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-m8rqb" event={"ID":"ad293038-bf1d-4800-bd32-9488c5f19e95","Type":"ContainerStarted","Data":"bdb463b23ac5ac9e9024eb9aef4baee7f3b4e4f17c968fa5d992aeb24602eeb8"} Feb 24 00:18:23 crc kubenswrapper[4824]: I0224 00:18:23.256310 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 24 00:18:23 crc kubenswrapper[4824]: W0224 00:18:23.272152 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96f9c835_f7c9_4774_9b95_8911ab4ffb23.slice/crio-a53751883953d3360125344cc83cacee9f03420695f0b2faf50f55b76b645c31 WatchSource:0}: Error finding container a53751883953d3360125344cc83cacee9f03420695f0b2faf50f55b76b645c31: Status 404 returned error can't find the container with id a53751883953d3360125344cc83cacee9f03420695f0b2faf50f55b76b645c31 Feb 24 00:18:23 crc 
kubenswrapper[4824]: I0224 00:18:23.896381 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"96f9c835-f7c9-4774-9b95-8911ab4ffb23","Type":"ContainerStarted","Data":"a53751883953d3360125344cc83cacee9f03420695f0b2faf50f55b76b645c31"} Feb 24 00:18:24 crc kubenswrapper[4824]: I0224 00:18:24.154370 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-qhzcr"] Feb 24 00:18:24 crc kubenswrapper[4824]: I0224 00:18:24.155819 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-qhzcr" Feb 24 00:18:24 crc kubenswrapper[4824]: I0224 00:18:24.173456 4824 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-hwctf" Feb 24 00:18:24 crc kubenswrapper[4824]: I0224 00:18:24.179238 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-qhzcr"] Feb 24 00:18:24 crc kubenswrapper[4824]: I0224 00:18:24.225847 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/219daf0d-f400-4a2c-8374-5c23e10c27a6-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-qhzcr\" (UID: \"219daf0d-f400-4a2c-8374-5c23e10c27a6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qhzcr" Feb 24 00:18:24 crc kubenswrapper[4824]: I0224 00:18:24.225937 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbkvb\" (UniqueName: \"kubernetes.io/projected/219daf0d-f400-4a2c-8374-5c23e10c27a6-kube-api-access-xbkvb\") pod \"cert-manager-cainjector-5545bd876-qhzcr\" (UID: \"219daf0d-f400-4a2c-8374-5c23e10c27a6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qhzcr" Feb 24 00:18:24 crc kubenswrapper[4824]: I0224 00:18:24.326804 4824 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbkvb\" (UniqueName: \"kubernetes.io/projected/219daf0d-f400-4a2c-8374-5c23e10c27a6-kube-api-access-xbkvb\") pod \"cert-manager-cainjector-5545bd876-qhzcr\" (UID: \"219daf0d-f400-4a2c-8374-5c23e10c27a6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qhzcr" Feb 24 00:18:24 crc kubenswrapper[4824]: I0224 00:18:24.326967 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/219daf0d-f400-4a2c-8374-5c23e10c27a6-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-qhzcr\" (UID: \"219daf0d-f400-4a2c-8374-5c23e10c27a6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qhzcr" Feb 24 00:18:24 crc kubenswrapper[4824]: I0224 00:18:24.346556 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbkvb\" (UniqueName: \"kubernetes.io/projected/219daf0d-f400-4a2c-8374-5c23e10c27a6-kube-api-access-xbkvb\") pod \"cert-manager-cainjector-5545bd876-qhzcr\" (UID: \"219daf0d-f400-4a2c-8374-5c23e10c27a6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qhzcr" Feb 24 00:18:24 crc kubenswrapper[4824]: I0224 00:18:24.346980 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/219daf0d-f400-4a2c-8374-5c23e10c27a6-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-qhzcr\" (UID: \"219daf0d-f400-4a2c-8374-5c23e10c27a6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-qhzcr" Feb 24 00:18:24 crc kubenswrapper[4824]: I0224 00:18:24.496501 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-qhzcr" Feb 24 00:18:25 crc kubenswrapper[4824]: I0224 00:18:25.279709 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-qhzcr"] Feb 24 00:18:25 crc kubenswrapper[4824]: W0224 00:18:25.292831 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod219daf0d_f400_4a2c_8374_5c23e10c27a6.slice/crio-111e6a81e36b2380ac3e3ce8cd8839b9ae71c986b50bc5d40bdd120bb29d0cca WatchSource:0}: Error finding container 111e6a81e36b2380ac3e3ce8cd8839b9ae71c986b50bc5d40bdd120bb29d0cca: Status 404 returned error can't find the container with id 111e6a81e36b2380ac3e3ce8cd8839b9ae71c986b50bc5d40bdd120bb29d0cca Feb 24 00:18:25 crc kubenswrapper[4824]: I0224 00:18:25.935501 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-qhzcr" event={"ID":"219daf0d-f400-4a2c-8374-5c23e10c27a6","Type":"ContainerStarted","Data":"111e6a81e36b2380ac3e3ce8cd8839b9ae71c986b50bc5d40bdd120bb29d0cca"} Feb 24 00:18:31 crc kubenswrapper[4824]: I0224 00:18:31.953276 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-9mpql"] Feb 24 00:18:31 crc kubenswrapper[4824]: I0224 00:18:31.954788 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-9mpql" Feb 24 00:18:31 crc kubenswrapper[4824]: I0224 00:18:31.959350 4824 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-t6lmx" Feb 24 00:18:31 crc kubenswrapper[4824]: I0224 00:18:31.963548 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-9mpql"] Feb 24 00:18:32 crc kubenswrapper[4824]: I0224 00:18:32.066553 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm8nb\" (UniqueName: \"kubernetes.io/projected/1f370348-c40e-4096-98c1-d681f34b8659-kube-api-access-cm8nb\") pod \"cert-manager-545d4d4674-9mpql\" (UID: \"1f370348-c40e-4096-98c1-d681f34b8659\") " pod="cert-manager/cert-manager-545d4d4674-9mpql" Feb 24 00:18:32 crc kubenswrapper[4824]: I0224 00:18:32.066609 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f370348-c40e-4096-98c1-d681f34b8659-bound-sa-token\") pod \"cert-manager-545d4d4674-9mpql\" (UID: \"1f370348-c40e-4096-98c1-d681f34b8659\") " pod="cert-manager/cert-manager-545d4d4674-9mpql" Feb 24 00:18:32 crc kubenswrapper[4824]: I0224 00:18:32.167495 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f370348-c40e-4096-98c1-d681f34b8659-bound-sa-token\") pod \"cert-manager-545d4d4674-9mpql\" (UID: \"1f370348-c40e-4096-98c1-d681f34b8659\") " pod="cert-manager/cert-manager-545d4d4674-9mpql" Feb 24 00:18:32 crc kubenswrapper[4824]: I0224 00:18:32.167606 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm8nb\" (UniqueName: \"kubernetes.io/projected/1f370348-c40e-4096-98c1-d681f34b8659-kube-api-access-cm8nb\") pod \"cert-manager-545d4d4674-9mpql\" (UID: 
\"1f370348-c40e-4096-98c1-d681f34b8659\") " pod="cert-manager/cert-manager-545d4d4674-9mpql" Feb 24 00:18:32 crc kubenswrapper[4824]: I0224 00:18:32.202673 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm8nb\" (UniqueName: \"kubernetes.io/projected/1f370348-c40e-4096-98c1-d681f34b8659-kube-api-access-cm8nb\") pod \"cert-manager-545d4d4674-9mpql\" (UID: \"1f370348-c40e-4096-98c1-d681f34b8659\") " pod="cert-manager/cert-manager-545d4d4674-9mpql" Feb 24 00:18:32 crc kubenswrapper[4824]: I0224 00:18:32.215443 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1f370348-c40e-4096-98c1-d681f34b8659-bound-sa-token\") pod \"cert-manager-545d4d4674-9mpql\" (UID: \"1f370348-c40e-4096-98c1-d681f34b8659\") " pod="cert-manager/cert-manager-545d4d4674-9mpql" Feb 24 00:18:32 crc kubenswrapper[4824]: I0224 00:18:32.271020 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-9mpql" Feb 24 00:18:38 crc kubenswrapper[4824]: E0224 00:18:38.906982 4824 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" Feb 24 00:18:38 crc kubenswrapper[4824]: E0224 00:18:38.908080 4824 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-webhook,Image:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671,Command:[/app/cmd/webhook/webhook],Args:[--dynamic-serving-ca-secret-name=cert-manager-webhook-ca --dynamic-serving-ca-secret-namespace=$(POD_NAMESPACE) 
--dynamic-serving-dns-names=cert-manager-webhook,cert-manager-webhook.$(POD_NAMESPACE),cert-manager-webhook.$(POD_NAMESPACE).svc --secure-port=10250 --v=2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:10250,Protocol:TCP,HostIP:,},ContainerPort{Name:healthcheck,HostPort:0,ContainerPort:6080,Protocol:TCP,HostIP:,},ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwd8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 
healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-webhook-6888856db4-m8rqb_cert-manager(ad293038-bf1d-4800-bd32-9488c5f19e95): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 00:18:38 crc kubenswrapper[4824]: E0224 00:18:38.910956 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager/cert-manager-webhook-6888856db4-m8rqb" podUID="ad293038-bf1d-4800-bd32-9488c5f19e95" Feb 24 00:18:38 crc kubenswrapper[4824]: E0224 00:18:38.911284 4824 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" Feb 24 00:18:38 crc kubenswrapper[4824]: E0224 00:18:38.911671 4824 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:cert-manager-cainjector,Image:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671,Command:[/app/cmd/cainjector/cainjector],Args:[--leader-election-namespace=kube-system --v=2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xbkvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-cainjector-5545bd876-qhzcr_cert-manager(219daf0d-f400-4a2c-8374-5c23e10c27a6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 00:18:38 crc 
kubenswrapper[4824]: E0224 00:18:38.913938 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager/cert-manager-cainjector-5545bd876-qhzcr" podUID="219daf0d-f400-4a2c-8374-5c23e10c27a6" Feb 24 00:18:39 crc kubenswrapper[4824]: E0224 00:18:39.022307 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671\\\"\"" pod="cert-manager/cert-manager-cainjector-5545bd876-qhzcr" podUID="219daf0d-f400-4a2c-8374-5c23e10c27a6" Feb 24 00:18:39 crc kubenswrapper[4824]: E0224 00:18:39.022344 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671\\\"\"" pod="cert-manager/cert-manager-webhook-6888856db4-m8rqb" podUID="ad293038-bf1d-4800-bd32-9488c5f19e95" Feb 24 00:18:39 crc kubenswrapper[4824]: E0224 00:18:39.324866 4824 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43" Feb 24 00:18:39 crc kubenswrapper[4824]: E0224 00:18:39.325216 4824 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:interconnect-operator,Image:registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43,Command:[qdr-operator],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:60000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:qdr-operator,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_QDROUTERD_IMAGE,Value:registry.redhat.io/amq7/amq-interconnect@sha256:31d87473fa684178a694f9ee331d3c80f2653f9533cb65c2a325752166a077e9,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:amq7-interconnect-operator.v1.10.20,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-92shp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
interconnect-operator-5bb49f789d-7d9xq_service-telemetry(125693c0-b095-4b7e-9ce3-b96785d4198e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 00:18:39 crc kubenswrapper[4824]: E0224 00:18:39.326764 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"interconnect-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="service-telemetry/interconnect-operator-5bb49f789d-7d9xq" podUID="125693c0-b095-4b7e-9ce3-b96785d4198e" Feb 24 00:18:40 crc kubenswrapper[4824]: E0224 00:18:40.032220 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"interconnect-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43\\\"\"" pod="service-telemetry/interconnect-operator-5bb49f789d-7d9xq" podUID="125693c0-b095-4b7e-9ce3-b96785d4198e" Feb 24 00:18:43 crc kubenswrapper[4824]: W0224 00:18:43.351475 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f370348_c40e_4096_98c1_d681f34b8659.slice/crio-88a05b1c998aacc24d685d63410da86a25088b3385f47b39deddbcb95a3277d8 WatchSource:0}: Error finding container 88a05b1c998aacc24d685d63410da86a25088b3385f47b39deddbcb95a3277d8: Status 404 returned error can't find the container with id 88a05b1c998aacc24d685d63410da86a25088b3385f47b39deddbcb95a3277d8 Feb 24 00:18:43 crc kubenswrapper[4824]: I0224 00:18:43.354889 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-9mpql"] Feb 24 00:18:44 crc kubenswrapper[4824]: I0224 00:18:44.059381 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"96f9c835-f7c9-4774-9b95-8911ab4ffb23","Type":"ContainerStarted","Data":"98348fc60c95f3b72a13eee8026e154755adf6da14a38680e033e1765f7ba472"}
Feb 24 00:18:44 crc kubenswrapper[4824]: I0224 00:18:44.063242 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-9mpql" event={"ID":"1f370348-c40e-4096-98c1-d681f34b8659","Type":"ContainerStarted","Data":"88a05b1c998aacc24d685d63410da86a25088b3385f47b39deddbcb95a3277d8"}
Feb 24 00:18:44 crc kubenswrapper[4824]: I0224 00:18:44.258033 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 24 00:18:44 crc kubenswrapper[4824]: I0224 00:18:44.313290 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 24 00:18:45 crc kubenswrapper[4824]: I0224 00:18:45.072904 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-9mpql" event={"ID":"1f370348-c40e-4096-98c1-d681f34b8659","Type":"ContainerStarted","Data":"33970f108535c8aa9f059b01e3c63ac325765820364921cdfcaf114319016f24"}
Feb 24 00:18:45 crc kubenswrapper[4824]: I0224 00:18:45.076785 4824 generic.go:334] "Generic (PLEG): container finished" podID="96f9c835-f7c9-4774-9b95-8911ab4ffb23" containerID="98348fc60c95f3b72a13eee8026e154755adf6da14a38680e033e1765f7ba472" exitCode=0
Feb 24 00:18:45 crc kubenswrapper[4824]: I0224 00:18:45.076883 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"96f9c835-f7c9-4774-9b95-8911ab4ffb23","Type":"ContainerDied","Data":"98348fc60c95f3b72a13eee8026e154755adf6da14a38680e033e1765f7ba472"}
Feb 24 00:18:45 crc kubenswrapper[4824]: I0224 00:18:45.099629 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-9mpql" podStartSLOduration=12.968534443 podStartE2EDuration="14.099602862s" podCreationTimestamp="2026-02-24 00:18:31 +0000 UTC" firstStartedPulling="2026-02-24 00:18:43.354566619 +0000 UTC m=+787.344191088" lastFinishedPulling="2026-02-24 00:18:44.485635038 +0000 UTC m=+788.475259507" observedRunningTime="2026-02-24 00:18:45.093260163 +0000 UTC m=+789.082884652" watchObservedRunningTime="2026-02-24 00:18:45.099602862 +0000 UTC m=+789.089227331"
Feb 24 00:18:46 crc kubenswrapper[4824]: I0224 00:18:46.085295 4824 generic.go:334] "Generic (PLEG): container finished" podID="96f9c835-f7c9-4774-9b95-8911ab4ffb23" containerID="42e1a62a5519ed4aad2873a43d577e42a2ffc35c648851add4f4846bd1dfb329" exitCode=0
Feb 24 00:18:46 crc kubenswrapper[4824]: I0224 00:18:46.087732 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"96f9c835-f7c9-4774-9b95-8911ab4ffb23","Type":"ContainerDied","Data":"42e1a62a5519ed4aad2873a43d577e42a2ffc35c648851add4f4846bd1dfb329"}
Feb 24 00:18:46 crc kubenswrapper[4824]: I0224 00:18:46.813904 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Feb 24 00:18:46 crc kubenswrapper[4824]: I0224 00:18:46.815250 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:46 crc kubenswrapper[4824]: I0224 00:18:46.818294 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-gqxxs"
Feb 24 00:18:46 crc kubenswrapper[4824]: I0224 00:18:46.818412 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca"
Feb 24 00:18:46 crc kubenswrapper[4824]: I0224 00:18:46.819962 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config"
Feb 24 00:18:46 crc kubenswrapper[4824]: I0224 00:18:46.827259 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca"
Feb 24 00:18:46 crc kubenswrapper[4824]: I0224 00:18:46.829979 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Feb 24 00:18:46 crc kubenswrapper[4824]: I0224 00:18:46.994785 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d647\" (UniqueName: \"kubernetes.io/projected/b1b50786-48f9-4f1a-bf8b-4686f9baae85-kube-api-access-8d647\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:46 crc kubenswrapper[4824]: I0224 00:18:46.994857 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b1b50786-48f9-4f1a-bf8b-4686f9baae85-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:46 crc kubenswrapper[4824]: I0224 00:18:46.994883 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:46 crc kubenswrapper[4824]: I0224 00:18:46.995040 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:46 crc kubenswrapper[4824]: I0224 00:18:46.995140 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/b1b50786-48f9-4f1a-bf8b-4686f9baae85-builder-dockercfg-gqxxs-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:46 crc kubenswrapper[4824]: I0224 00:18:46.995213 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b1b50786-48f9-4f1a-bf8b-4686f9baae85-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:46 crc kubenswrapper[4824]: I0224 00:18:46.995265 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:46 crc kubenswrapper[4824]: I0224 00:18:46.995306 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/b1b50786-48f9-4f1a-bf8b-4686f9baae85-builder-dockercfg-gqxxs-push\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:46 crc kubenswrapper[4824]: I0224 00:18:46.995355 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:46 crc kubenswrapper[4824]: I0224 00:18:46.995470 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:46 crc kubenswrapper[4824]: I0224 00:18:46.995508 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:46 crc kubenswrapper[4824]: I0224 00:18:46.995691 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.095746 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"96f9c835-f7c9-4774-9b95-8911ab4ffb23","Type":"ContainerStarted","Data":"8b87192a94b84755847233b1f93d248ab45a5f5eb0ed3c88fbac47f8bf0fdb77"}
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.096037 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.096488 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.096581 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/b1b50786-48f9-4f1a-bf8b-4686f9baae85-builder-dockercfg-gqxxs-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.096618 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b1b50786-48f9-4f1a-bf8b-4686f9baae85-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.096643 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.096669 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/b1b50786-48f9-4f1a-bf8b-4686f9baae85-builder-dockercfg-gqxxs-push\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.096693 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.096753 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.096800 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.096832 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.096871 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d647\" (UniqueName: \"kubernetes.io/projected/b1b50786-48f9-4f1a-bf8b-4686f9baae85-kube-api-access-8d647\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.096900 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b1b50786-48f9-4f1a-bf8b-4686f9baae85-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.096920 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.097884 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.098217 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.098452 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.098467 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.098740 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.098828 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b1b50786-48f9-4f1a-bf8b-4686f9baae85-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.098828 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b1b50786-48f9-4f1a-bf8b-4686f9baae85-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.098905 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.099255 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.108037 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/b1b50786-48f9-4f1a-bf8b-4686f9baae85-builder-dockercfg-gqxxs-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.120925 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d647\" (UniqueName: \"kubernetes.io/projected/b1b50786-48f9-4f1a-bf8b-4686f9baae85-kube-api-access-8d647\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.124061 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/b1b50786-48f9-4f1a-bf8b-4686f9baae85-builder-dockercfg-gqxxs-push\") pod \"service-telemetry-operator-1-build\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.133834 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.140327 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=7.11710773 podStartE2EDuration="27.140305837s" podCreationTimestamp="2026-02-24 00:18:20 +0000 UTC" firstStartedPulling="2026-02-24 00:18:23.282062947 +0000 UTC m=+767.271687426" lastFinishedPulling="2026-02-24 00:18:43.305261034 +0000 UTC m=+787.294885533" observedRunningTime="2026-02-24 00:18:47.139839125 +0000 UTC m=+791.129463594" watchObservedRunningTime="2026-02-24 00:18:47.140305837 +0000 UTC m=+791.129930306"
Feb 24 00:18:47 crc kubenswrapper[4824]: I0224 00:18:47.457039 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Feb 24 00:18:47 crc kubenswrapper[4824]: W0224 00:18:47.463096 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1b50786_48f9_4f1a_bf8b_4686f9baae85.slice/crio-d08a403612b11c8bf91f5e3516b788a4c4942b41c820856155d377d7e8361b75 WatchSource:0}: Error finding container d08a403612b11c8bf91f5e3516b788a4c4942b41c820856155d377d7e8361b75: Status 404 returned error can't find the container with id d08a403612b11c8bf91f5e3516b788a4c4942b41c820856155d377d7e8361b75
Feb 24 00:18:48 crc kubenswrapper[4824]: I0224 00:18:48.104798 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"b1b50786-48f9-4f1a-bf8b-4686f9baae85","Type":"ContainerStarted","Data":"d08a403612b11c8bf91f5e3516b788a4c4942b41c820856155d377d7e8361b75"}
Feb 24 00:18:53 crc kubenswrapper[4824]: I0224 00:18:53.275742 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 00:18:53 crc kubenswrapper[4824]: I0224 00:18:53.276635 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 00:18:54 crc kubenswrapper[4824]: I0224 00:18:54.168146 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-7d9xq" event={"ID":"125693c0-b095-4b7e-9ce3-b96785d4198e","Type":"ContainerStarted","Data":"6524e4c3b507642b50a96cffbe386804f5b8232b06ea6f927c77136198a8a496"}
Feb 24 00:18:54 crc kubenswrapper[4824]: I0224 00:18:54.169371 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-qhzcr" event={"ID":"219daf0d-f400-4a2c-8374-5c23e10c27a6","Type":"ContainerStarted","Data":"871c5880f89e2ca8dce6b4a9fb09ca48dac5394160322047f990e07ba8448b18"}
Feb 24 00:18:54 crc kubenswrapper[4824]: I0224 00:18:54.171821 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-m8rqb" event={"ID":"ad293038-bf1d-4800-bd32-9488c5f19e95","Type":"ContainerStarted","Data":"a7ff395be8da0def82507a88e46b01ea6cba5773d09ca6fe2c05f85b04310f1d"}
Feb 24 00:18:54 crc kubenswrapper[4824]: I0224 00:18:54.172003 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-m8rqb"
Feb 24 00:18:54 crc kubenswrapper[4824]: I0224 00:18:54.173773 4824 generic.go:334] "Generic (PLEG): container finished" podID="b1b50786-48f9-4f1a-bf8b-4686f9baae85" containerID="ab94cc45a4691d159d5d48b21f724973db63d5bd87b6d0a0e4494040c5bc84b1" exitCode=0
Feb 24 00:18:54 crc kubenswrapper[4824]: I0224 00:18:54.173805 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"b1b50786-48f9-4f1a-bf8b-4686f9baae85","Type":"ContainerDied","Data":"ab94cc45a4691d159d5d48b21f724973db63d5bd87b6d0a0e4494040c5bc84b1"}
Feb 24 00:18:54 crc kubenswrapper[4824]: I0224 00:18:54.257310 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-7d9xq" podStartSLOduration=5.370088689 podStartE2EDuration="52.257277208s" podCreationTimestamp="2026-02-24 00:18:02 +0000 UTC" firstStartedPulling="2026-02-24 00:18:05.930014383 +0000 UTC m=+749.919638852" lastFinishedPulling="2026-02-24 00:18:52.817202882 +0000 UTC m=+796.806827371" observedRunningTime="2026-02-24 00:18:54.196931385 +0000 UTC m=+798.186555874" watchObservedRunningTime="2026-02-24 00:18:54.257277208 +0000 UTC m=+798.246901677"
Feb 24 00:18:54 crc kubenswrapper[4824]: I0224 00:18:54.274653 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-qhzcr" podStartSLOduration=-9223372006.580164 podStartE2EDuration="30.274612103s" podCreationTimestamp="2026-02-24 00:18:24 +0000 UTC" firstStartedPulling="2026-02-24 00:18:25.306249059 +0000 UTC m=+769.295873528" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:18:54.272554052 +0000 UTC m=+798.262178541" watchObservedRunningTime="2026-02-24 00:18:54.274612103 +0000 UTC m=+798.264236572"
Feb 24 00:18:54 crc kubenswrapper[4824]: I0224 00:18:54.304738 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-m8rqb" podStartSLOduration=-9223372003.550068 podStartE2EDuration="33.304708838s" podCreationTimestamp="2026-02-24 00:18:21 +0000 UTC" firstStartedPulling="2026-02-24 00:18:22.580681165 +0000 UTC m=+766.570305624" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:18:54.303778564 +0000 UTC m=+798.293403043" watchObservedRunningTime="2026-02-24 00:18:54.304708838 +0000 UTC m=+798.294333327"
Feb 24 00:18:55 crc kubenswrapper[4824]: I0224 00:18:55.183563 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"b1b50786-48f9-4f1a-bf8b-4686f9baae85","Type":"ContainerStarted","Data":"8766c6c73cf28ccc576c83f82e20c6fe70a6664d9a7a04a13302eaacbca67f41"}
Feb 24 00:18:55 crc kubenswrapper[4824]: I0224 00:18:55.215813 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-1-build" podStartSLOduration=3.863134537 podStartE2EDuration="9.215787982s" podCreationTimestamp="2026-02-24 00:18:46 +0000 UTC" firstStartedPulling="2026-02-24 00:18:47.492557529 +0000 UTC m=+791.482182008" lastFinishedPulling="2026-02-24 00:18:52.845210984 +0000 UTC m=+796.834835453" observedRunningTime="2026-02-24 00:18:55.210284274 +0000 UTC m=+799.199908753" watchObservedRunningTime="2026-02-24 00:18:55.215787982 +0000 UTC m=+799.205412451"
Feb 24 00:18:57 crc kubenswrapper[4824]: I0224 00:18:57.233544 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Feb 24 00:18:57 crc kubenswrapper[4824]: I0224 00:18:57.234227 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="b1b50786-48f9-4f1a-bf8b-4686f9baae85" containerName="docker-build" containerID="cri-o://8766c6c73cf28ccc576c83f82e20c6fe70a6664d9a7a04a13302eaacbca67f41" gracePeriod=30
Feb 24 00:18:57 crc kubenswrapper[4824]: I0224 00:18:57.801560 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="96f9c835-f7c9-4774-9b95-8911ab4ffb23" containerName="elasticsearch" probeResult="failure" output=<
Feb 24 00:18:57 crc kubenswrapper[4824]: {"timestamp": "2026-02-24T00:18:57+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Feb 24 00:18:57 crc kubenswrapper[4824]: >
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.131895 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.133431 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.138304 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.138451 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.138655 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.155111 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.222457 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/cc20437c-c977-4543-a681-cda1af5c3583-builder-dockercfg-gqxxs-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.222573 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.222777 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.222888 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc20437c-c977-4543-a681-cda1af5c3583-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.222945 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.223084 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc20437c-c977-4543-a681-cda1af5c3583-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.223168 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cc20437c-c977-4543-a681-cda1af5c3583-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.223242 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r7jd\" (UniqueName: \"kubernetes.io/projected/cc20437c-c977-4543-a681-cda1af5c3583-kube-api-access-2r7jd\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.223305 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cc20437c-c977-4543-a681-cda1af5c3583-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.223387 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc20437c-c977-4543-a681-cda1af5c3583-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.223433 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/cc20437c-c977-4543-a681-cda1af5c3583-builder-dockercfg-gqxxs-push\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.223462 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.325570 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.326020 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc20437c-c977-4543-a681-cda1af5c3583-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.326130 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.326151 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.326324 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc20437c-c977-4543-a681-cda1af5c3583-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.326416 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cc20437c-c977-4543-a681-cda1af5c3583-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.326501 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r7jd\" (UniqueName: \"kubernetes.io/projected/cc20437c-c977-4543-a681-cda1af5c3583-kube-api-access-2r7jd\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.326635 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cc20437c-c977-4543-a681-cda1af5c3583-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.326726 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc20437c-c977-4543-a681-cda1af5c3583-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.326804 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/cc20437c-c977-4543-a681-cda1af5c3583-builder-dockercfg-gqxxs-push\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.326884 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.326969 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/cc20437c-c977-4543-a681-cda1af5c3583-builder-dockercfg-gqxxs-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build"
Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.327054 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName:
\"kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.326886 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc20437c-c977-4543-a681-cda1af5c3583-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.326792 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cc20437c-c977-4543-a681-cda1af5c3583-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.326804 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.327264 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc20437c-c977-4543-a681-cda1af5c3583-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.327288 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.327305 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cc20437c-c977-4543-a681-cda1af5c3583-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.327825 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.328018 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc20437c-c977-4543-a681-cda1af5c3583-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.346101 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/cc20437c-c977-4543-a681-cda1af5c3583-builder-dockercfg-gqxxs-push\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.346105 4824 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/cc20437c-c977-4543-a681-cda1af5c3583-builder-dockercfg-gqxxs-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.350865 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r7jd\" (UniqueName: \"kubernetes.io/projected/cc20437c-c977-4543-a681-cda1af5c3583-kube-api-access-2r7jd\") pod \"service-telemetry-operator-2-build\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:18:59 crc kubenswrapper[4824]: I0224 00:18:59.456752 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:19:00 crc kubenswrapper[4824]: I0224 00:19:00.637981 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 24 00:19:00 crc kubenswrapper[4824]: W0224 00:19:00.651640 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc20437c_c977_4543_a681_cda1af5c3583.slice/crio-fd9408fdeee02b30a3fa42083cf9391e2cb143354c82385b1e3d293e48b613e4 WatchSource:0}: Error finding container fd9408fdeee02b30a3fa42083cf9391e2cb143354c82385b1e3d293e48b613e4: Status 404 returned error can't find the container with id fd9408fdeee02b30a3fa42083cf9391e2cb143354c82385b1e3d293e48b613e4 Feb 24 00:19:01 crc kubenswrapper[4824]: I0224 00:19:01.232663 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" 
event={"ID":"cc20437c-c977-4543-a681-cda1af5c3583","Type":"ContainerStarted","Data":"fd9408fdeee02b30a3fa42083cf9391e2cb143354c82385b1e3d293e48b613e4"} Feb 24 00:19:02 crc kubenswrapper[4824]: I0224 00:19:02.017536 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-m8rqb" Feb 24 00:19:02 crc kubenswrapper[4824]: I0224 00:19:02.771938 4824 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="96f9c835-f7c9-4774-9b95-8911ab4ffb23" containerName="elasticsearch" probeResult="failure" output=< Feb 24 00:19:02 crc kubenswrapper[4824]: {"timestamp": "2026-02-24T00:19:02+00:00", "message": "readiness probe failed", "curl_rc": "7"} Feb 24 00:19:02 crc kubenswrapper[4824]: > Feb 24 00:19:03 crc kubenswrapper[4824]: I0224 00:19:03.948246 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_b1b50786-48f9-4f1a-bf8b-4686f9baae85/docker-build/0.log" Feb 24 00:19:03 crc kubenswrapper[4824]: I0224 00:19:03.949642 4824 generic.go:334] "Generic (PLEG): container finished" podID="b1b50786-48f9-4f1a-bf8b-4686f9baae85" containerID="8766c6c73cf28ccc576c83f82e20c6fe70a6664d9a7a04a13302eaacbca67f41" exitCode=1 Feb 24 00:19:03 crc kubenswrapper[4824]: I0224 00:19:03.949719 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"b1b50786-48f9-4f1a-bf8b-4686f9baae85","Type":"ContainerDied","Data":"8766c6c73cf28ccc576c83f82e20c6fe70a6664d9a7a04a13302eaacbca67f41"} Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.084246 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_b1b50786-48f9-4f1a-bf8b-4686f9baae85/docker-build/0.log" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.084942 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.107661 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-container-storage-root\") pod \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.107718 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-container-storage-run\") pod \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.107795 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/b1b50786-48f9-4f1a-bf8b-4686f9baae85-builder-dockercfg-gqxxs-pull\") pod \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.107829 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-ca-bundles\") pod \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.107876 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d647\" (UniqueName: \"kubernetes.io/projected/b1b50786-48f9-4f1a-bf8b-4686f9baae85-kube-api-access-8d647\") pod \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.107927 4824 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b1b50786-48f9-4f1a-bf8b-4686f9baae85-buildcachedir\") pod \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.108046 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-system-configs\") pod \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.108108 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/b1b50786-48f9-4f1a-bf8b-4686f9baae85-builder-dockercfg-gqxxs-push\") pod \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.108155 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b1b50786-48f9-4f1a-bf8b-4686f9baae85-node-pullsecrets\") pod \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.108191 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-buildworkdir\") pod \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.108264 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-proxy-ca-bundles\") pod \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.108307 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-blob-cache\") pod \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\" (UID: \"b1b50786-48f9-4f1a-bf8b-4686f9baae85\") " Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.109617 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b1b50786-48f9-4f1a-bf8b-4686f9baae85" (UID: "b1b50786-48f9-4f1a-bf8b-4686f9baae85"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.109687 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b50786-48f9-4f1a-bf8b-4686f9baae85-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "b1b50786-48f9-4f1a-bf8b-4686f9baae85" (UID: "b1b50786-48f9-4f1a-bf8b-4686f9baae85"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.109820 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b50786-48f9-4f1a-bf8b-4686f9baae85-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b1b50786-48f9-4f1a-bf8b-4686f9baae85" (UID: "b1b50786-48f9-4f1a-bf8b-4686f9baae85"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.110759 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b1b50786-48f9-4f1a-bf8b-4686f9baae85" (UID: "b1b50786-48f9-4f1a-bf8b-4686f9baae85"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.111136 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b1b50786-48f9-4f1a-bf8b-4686f9baae85" (UID: "b1b50786-48f9-4f1a-bf8b-4686f9baae85"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.111159 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b1b50786-48f9-4f1a-bf8b-4686f9baae85" (UID: "b1b50786-48f9-4f1a-bf8b-4686f9baae85"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.112088 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b1b50786-48f9-4f1a-bf8b-4686f9baae85" (UID: "b1b50786-48f9-4f1a-bf8b-4686f9baae85"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.112368 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b1b50786-48f9-4f1a-bf8b-4686f9baae85" (UID: "b1b50786-48f9-4f1a-bf8b-4686f9baae85"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.112379 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b1b50786-48f9-4f1a-bf8b-4686f9baae85" (UID: "b1b50786-48f9-4f1a-bf8b-4686f9baae85"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.118757 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b50786-48f9-4f1a-bf8b-4686f9baae85-builder-dockercfg-gqxxs-push" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-push") pod "b1b50786-48f9-4f1a-bf8b-4686f9baae85" (UID: "b1b50786-48f9-4f1a-bf8b-4686f9baae85"). InnerVolumeSpecName "builder-dockercfg-gqxxs-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.119760 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b50786-48f9-4f1a-bf8b-4686f9baae85-builder-dockercfg-gqxxs-pull" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-pull") pod "b1b50786-48f9-4f1a-bf8b-4686f9baae85" (UID: "b1b50786-48f9-4f1a-bf8b-4686f9baae85"). InnerVolumeSpecName "builder-dockercfg-gqxxs-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.122745 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b50786-48f9-4f1a-bf8b-4686f9baae85-kube-api-access-8d647" (OuterVolumeSpecName: "kube-api-access-8d647") pod "b1b50786-48f9-4f1a-bf8b-4686f9baae85" (UID: "b1b50786-48f9-4f1a-bf8b-4686f9baae85"). InnerVolumeSpecName "kube-api-access-8d647". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.209970 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.210022 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.210040 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/b1b50786-48f9-4f1a-bf8b-4686f9baae85-builder-dockercfg-gqxxs-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.210057 4824 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.210070 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d647\" (UniqueName: \"kubernetes.io/projected/b1b50786-48f9-4f1a-bf8b-4686f9baae85-kube-api-access-8d647\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.210088 4824 
reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b1b50786-48f9-4f1a-bf8b-4686f9baae85-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.210102 4824 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.210115 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/b1b50786-48f9-4f1a-bf8b-4686f9baae85-builder-dockercfg-gqxxs-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.210128 4824 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b1b50786-48f9-4f1a-bf8b-4686f9baae85-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.210143 4824 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.210157 4824 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.210213 4824 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b1b50786-48f9-4f1a-bf8b-4686f9baae85-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.958824 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"cc20437c-c977-4543-a681-cda1af5c3583","Type":"ContainerStarted","Data":"57263a4947e6ca13016d819f5f6d5967c292ce21c193f314e05a87993146fc70"} Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.961561 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_b1b50786-48f9-4f1a-bf8b-4686f9baae85/docker-build/0.log" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.962001 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"b1b50786-48f9-4f1a-bf8b-4686f9baae85","Type":"ContainerDied","Data":"d08a403612b11c8bf91f5e3516b788a4c4942b41c820856155d377d7e8361b75"} Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.962059 4824 scope.go:117] "RemoveContainer" containerID="8766c6c73cf28ccc576c83f82e20c6fe70a6664d9a7a04a13302eaacbca67f41" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.962210 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:19:04 crc kubenswrapper[4824]: I0224 00:19:04.986833 4824 scope.go:117] "RemoveContainer" containerID="ab94cc45a4691d159d5d48b21f724973db63d5bd87b6d0a0e4494040c5bc84b1" Feb 24 00:19:05 crc kubenswrapper[4824]: I0224 00:19:05.025136 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 24 00:19:05 crc kubenswrapper[4824]: E0224 00:19:05.046961 4824 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=4548264831213021485, SKID=, AKID=10:68:53:25:D1:3D:2E:E3:54:0D:95:0B:6A:90:F1:BD:A9:4F:27:6E failed: x509: certificate signed by unknown authority" Feb 24 00:19:05 crc kubenswrapper[4824]: I0224 00:19:05.048042 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 24 00:19:06 crc kubenswrapper[4824]: I0224 00:19:06.084918 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 24 00:19:06 crc kubenswrapper[4824]: I0224 00:19:06.700910 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1b50786-48f9-4f1a-bf8b-4686f9baae85" path="/var/lib/kubelet/pods/b1b50786-48f9-4f1a-bf8b-4686f9baae85/volumes" Feb 24 00:19:06 crc kubenswrapper[4824]: I0224 00:19:06.975479 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-2-build" podUID="cc20437c-c977-4543-a681-cda1af5c3583" containerName="git-clone" containerID="cri-o://57263a4947e6ca13016d819f5f6d5967c292ce21c193f314e05a87993146fc70" gracePeriod=30 Feb 24 00:19:07 crc kubenswrapper[4824]: I0224 00:19:07.984705 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_cc20437c-c977-4543-a681-cda1af5c3583/git-clone/0.log" Feb 24 00:19:07 crc 
kubenswrapper[4824]: I0224 00:19:07.986417 4824 generic.go:334] "Generic (PLEG): container finished" podID="cc20437c-c977-4543-a681-cda1af5c3583" containerID="57263a4947e6ca13016d819f5f6d5967c292ce21c193f314e05a87993146fc70" exitCode=1 Feb 24 00:19:07 crc kubenswrapper[4824]: I0224 00:19:07.986508 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"cc20437c-c977-4543-a681-cda1af5c3583","Type":"ContainerDied","Data":"57263a4947e6ca13016d819f5f6d5967c292ce21c193f314e05a87993146fc70"} Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.211280 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.567004 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_cc20437c-c977-4543-a681-cda1af5c3583/git-clone/0.log" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.567126 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.590910 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc20437c-c977-4543-a681-cda1af5c3583-build-proxy-ca-bundles\") pod \"cc20437c-c977-4543-a681-cda1af5c3583\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.590991 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/cc20437c-c977-4543-a681-cda1af5c3583-builder-dockercfg-gqxxs-pull\") pod \"cc20437c-c977-4543-a681-cda1af5c3583\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.591079 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cc20437c-c977-4543-a681-cda1af5c3583-build-system-configs\") pod \"cc20437c-c977-4543-a681-cda1af5c3583\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.591117 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc20437c-c977-4543-a681-cda1af5c3583-node-pullsecrets\") pod \"cc20437c-c977-4543-a681-cda1af5c3583\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.591150 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-container-storage-run\") pod \"cc20437c-c977-4543-a681-cda1af5c3583\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.591183 4824 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r7jd\" (UniqueName: \"kubernetes.io/projected/cc20437c-c977-4543-a681-cda1af5c3583-kube-api-access-2r7jd\") pod \"cc20437c-c977-4543-a681-cda1af5c3583\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.591204 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/cc20437c-c977-4543-a681-cda1af5c3583-builder-dockercfg-gqxxs-push\") pod \"cc20437c-c977-4543-a681-cda1af5c3583\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.591236 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cc20437c-c977-4543-a681-cda1af5c3583-buildcachedir\") pod \"cc20437c-c977-4543-a681-cda1af5c3583\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.591284 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-container-storage-root\") pod \"cc20437c-c977-4543-a681-cda1af5c3583\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.591302 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-buildworkdir\") pod \"cc20437c-c977-4543-a681-cda1af5c3583\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.591354 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/cc20437c-c977-4543-a681-cda1af5c3583-build-ca-bundles\") pod \"cc20437c-c977-4543-a681-cda1af5c3583\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.591400 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-build-blob-cache\") pod \"cc20437c-c977-4543-a681-cda1af5c3583\" (UID: \"cc20437c-c977-4543-a681-cda1af5c3583\") " Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.591626 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc20437c-c977-4543-a681-cda1af5c3583-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "cc20437c-c977-4543-a681-cda1af5c3583" (UID: "cc20437c-c977-4543-a681-cda1af5c3583"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.591832 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc20437c-c977-4543-a681-cda1af5c3583-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "cc20437c-c977-4543-a681-cda1af5c3583" (UID: "cc20437c-c977-4543-a681-cda1af5c3583"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.591881 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc20437c-c977-4543-a681-cda1af5c3583-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "cc20437c-c977-4543-a681-cda1af5c3583" (UID: "cc20437c-c977-4543-a681-cda1af5c3583"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.591965 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "cc20437c-c977-4543-a681-cda1af5c3583" (UID: "cc20437c-c977-4543-a681-cda1af5c3583"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.592397 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc20437c-c977-4543-a681-cda1af5c3583-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "cc20437c-c977-4543-a681-cda1af5c3583" (UID: "cc20437c-c977-4543-a681-cda1af5c3583"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.592499 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "cc20437c-c977-4543-a681-cda1af5c3583" (UID: "cc20437c-c977-4543-a681-cda1af5c3583"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.592566 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "cc20437c-c977-4543-a681-cda1af5c3583" (UID: "cc20437c-c977-4543-a681-cda1af5c3583"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.592607 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "cc20437c-c977-4543-a681-cda1af5c3583" (UID: "cc20437c-c977-4543-a681-cda1af5c3583"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.592893 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc20437c-c977-4543-a681-cda1af5c3583-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "cc20437c-c977-4543-a681-cda1af5c3583" (UID: "cc20437c-c977-4543-a681-cda1af5c3583"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.600960 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc20437c-c977-4543-a681-cda1af5c3583-builder-dockercfg-gqxxs-pull" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-pull") pod "cc20437c-c977-4543-a681-cda1af5c3583" (UID: "cc20437c-c977-4543-a681-cda1af5c3583"). InnerVolumeSpecName "builder-dockercfg-gqxxs-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.601895 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc20437c-c977-4543-a681-cda1af5c3583-kube-api-access-2r7jd" (OuterVolumeSpecName: "kube-api-access-2r7jd") pod "cc20437c-c977-4543-a681-cda1af5c3583" (UID: "cc20437c-c977-4543-a681-cda1af5c3583"). InnerVolumeSpecName "kube-api-access-2r7jd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.603214 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc20437c-c977-4543-a681-cda1af5c3583-builder-dockercfg-gqxxs-push" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-push") pod "cc20437c-c977-4543-a681-cda1af5c3583" (UID: "cc20437c-c977-4543-a681-cda1af5c3583"). InnerVolumeSpecName "builder-dockercfg-gqxxs-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.694431 4824 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cc20437c-c977-4543-a681-cda1af5c3583-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.694707 4824 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cc20437c-c977-4543-a681-cda1af5c3583-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.694792 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.694808 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r7jd\" (UniqueName: \"kubernetes.io/projected/cc20437c-c977-4543-a681-cda1af5c3583-kube-api-access-2r7jd\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.694820 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/cc20437c-c977-4543-a681-cda1af5c3583-builder-dockercfg-gqxxs-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 
00:19:08.694833 4824 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cc20437c-c977-4543-a681-cda1af5c3583-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.694872 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.694881 4824 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.694893 4824 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc20437c-c977-4543-a681-cda1af5c3583-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.694904 4824 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cc20437c-c977-4543-a681-cda1af5c3583-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.694914 4824 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cc20437c-c977-4543-a681-cda1af5c3583-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.694923 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/cc20437c-c977-4543-a681-cda1af5c3583-builder-dockercfg-gqxxs-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.995759 4824 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_cc20437c-c977-4543-a681-cda1af5c3583/git-clone/0.log" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.995832 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"cc20437c-c977-4543-a681-cda1af5c3583","Type":"ContainerDied","Data":"fd9408fdeee02b30a3fa42083cf9391e2cb143354c82385b1e3d293e48b613e4"} Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.995883 4824 scope.go:117] "RemoveContainer" containerID="57263a4947e6ca13016d819f5f6d5967c292ce21c193f314e05a87993146fc70" Feb 24 00:19:08 crc kubenswrapper[4824]: I0224 00:19:08.995925 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:19:09 crc kubenswrapper[4824]: I0224 00:19:09.024717 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 24 00:19:09 crc kubenswrapper[4824]: I0224 00:19:09.032252 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 24 00:19:10 crc kubenswrapper[4824]: I0224 00:19:10.703795 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc20437c-c977-4543-a681-cda1af5c3583" path="/var/lib/kubelet/pods/cc20437c-c977-4543-a681-cda1af5c3583/volumes" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.639914 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Feb 24 00:19:17 crc kubenswrapper[4824]: E0224 00:19:17.642112 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b50786-48f9-4f1a-bf8b-4686f9baae85" containerName="docker-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.642134 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b50786-48f9-4f1a-bf8b-4686f9baae85" 
containerName="docker-build" Feb 24 00:19:17 crc kubenswrapper[4824]: E0224 00:19:17.642156 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc20437c-c977-4543-a681-cda1af5c3583" containerName="git-clone" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.642165 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc20437c-c977-4543-a681-cda1af5c3583" containerName="git-clone" Feb 24 00:19:17 crc kubenswrapper[4824]: E0224 00:19:17.642178 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b50786-48f9-4f1a-bf8b-4686f9baae85" containerName="manage-dockerfile" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.642185 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b50786-48f9-4f1a-bf8b-4686f9baae85" containerName="manage-dockerfile" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.642295 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc20437c-c977-4543-a681-cda1af5c3583" containerName="git-clone" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.642306 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b50786-48f9-4f1a-bf8b-4686f9baae85" containerName="docker-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.643327 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.648764 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-3-ca" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.649004 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-3-sys-config" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.649076 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-3-global-ca" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.649486 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-gqxxs" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.668011 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.739836 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c95f5c0-186a-4a4b-867d-88660b3edf1f-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.739905 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/6c95f5c0-186a-4a4b-867d-88660b3edf1f-builder-dockercfg-gqxxs-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.739935 4824 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.740079 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.740113 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6c95f5c0-186a-4a4b-867d-88660b3edf1f-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.740153 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.740181 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: 
\"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.740220 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.740248 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.740281 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.740309 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/6c95f5c0-186a-4a4b-867d-88660b3edf1f-builder-dockercfg-gqxxs-push\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.740414 4824 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvn47\" (UniqueName: \"kubernetes.io/projected/6c95f5c0-186a-4a4b-867d-88660b3edf1f-kube-api-access-dvn47\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.841931 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.842046 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.842083 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.842124 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/6c95f5c0-186a-4a4b-867d-88660b3edf1f-builder-dockercfg-gqxxs-push\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " 
pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.842214 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvn47\" (UniqueName: \"kubernetes.io/projected/6c95f5c0-186a-4a4b-867d-88660b3edf1f-kube-api-access-dvn47\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.842273 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c95f5c0-186a-4a4b-867d-88660b3edf1f-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.842347 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/6c95f5c0-186a-4a4b-867d-88660b3edf1f-builder-dockercfg-gqxxs-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.842404 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.842502 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.842589 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6c95f5c0-186a-4a4b-867d-88660b3edf1f-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.842647 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.842776 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.842799 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6c95f5c0-186a-4a4b-867d-88660b3edf1f-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.842968 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.843071 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.843180 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.843197 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.843431 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.843881 4824 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.843962 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c95f5c0-186a-4a4b-867d-88660b3edf1f-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.844616 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.854277 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/6c95f5c0-186a-4a4b-867d-88660b3edf1f-builder-dockercfg-gqxxs-push\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.854273 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/6c95f5c0-186a-4a4b-867d-88660b3edf1f-builder-dockercfg-gqxxs-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " 
pod="service-telemetry/service-telemetry-operator-3-build"
Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.861328 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvn47\" (UniqueName: \"kubernetes.io/projected/6c95f5c0-186a-4a4b-867d-88660b3edf1f-kube-api-access-dvn47\") pod \"service-telemetry-operator-3-build\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 24 00:19:17 crc kubenswrapper[4824]: I0224 00:19:17.962541 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build"
Feb 24 00:19:18 crc kubenswrapper[4824]: I0224 00:19:18.188604 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"]
Feb 24 00:19:19 crc kubenswrapper[4824]: I0224 00:19:19.063814 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"6c95f5c0-186a-4a4b-867d-88660b3edf1f","Type":"ContainerStarted","Data":"38e58f7cc777ad190c4f6b480752e9102012919dd4b65943269d93402ea90dad"}
Feb 24 00:19:19 crc kubenswrapper[4824]: I0224 00:19:19.064293 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"6c95f5c0-186a-4a4b-867d-88660b3edf1f","Type":"ContainerStarted","Data":"d5eb4c33df13b7de6ce3c2cf8562f2a4d64e137c350b97d64bc6f16d1d0fafdc"}
Feb 24 00:19:19 crc kubenswrapper[4824]: E0224 00:19:19.129360 4824 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=4548264831213021485, SKID=, AKID=10:68:53:25:D1:3D:2E:E3:54:0D:95:0B:6A:90:F1:BD:A9:4F:27:6E failed: x509: certificate signed by unknown authority"
Feb 24 00:19:20 crc kubenswrapper[4824]: I0224 00:19:20.162149 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"]
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.078685 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-3-build" podUID="6c95f5c0-186a-4a4b-867d-88660b3edf1f" containerName="git-clone" containerID="cri-o://38e58f7cc777ad190c4f6b480752e9102012919dd4b65943269d93402ea90dad" gracePeriod=30
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.439232 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_6c95f5c0-186a-4a4b-867d-88660b3edf1f/git-clone/0.log"
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.439855 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build"
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.501186 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-buildworkdir\") pod \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") "
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.501324 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-proxy-ca-bundles\") pod \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") "
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.501355 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-system-configs\") pod \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") "
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.501504 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/6c95f5c0-186a-4a4b-867d-88660b3edf1f-builder-dockercfg-gqxxs-pull\") pod \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") "
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.502960 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-blob-cache\") pod \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") "
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.502991 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvn47\" (UniqueName: \"kubernetes.io/projected/6c95f5c0-186a-4a4b-867d-88660b3edf1f-kube-api-access-dvn47\") pod \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") "
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.503018 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-container-storage-run\") pod \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") "
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.503054 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c95f5c0-186a-4a4b-867d-88660b3edf1f-node-pullsecrets\") pod \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") "
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.503081 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-ca-bundles\") pod \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") "
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.503115 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-container-storage-root\") pod \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") "
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.503201 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/6c95f5c0-186a-4a4b-867d-88660b3edf1f-builder-dockercfg-gqxxs-push\") pod \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") "
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.503254 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6c95f5c0-186a-4a4b-867d-88660b3edf1f-buildcachedir\") pod \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\" (UID: \"6c95f5c0-186a-4a4b-867d-88660b3edf1f\") "
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.502374 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "6c95f5c0-186a-4a4b-867d-88660b3edf1f" (UID: "6c95f5c0-186a-4a4b-867d-88660b3edf1f"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.502427 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "6c95f5c0-186a-4a4b-867d-88660b3edf1f" (UID: "6c95f5c0-186a-4a4b-867d-88660b3edf1f"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.502757 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "6c95f5c0-186a-4a4b-867d-88660b3edf1f" (UID: "6c95f5c0-186a-4a4b-867d-88660b3edf1f"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.503154 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c95f5c0-186a-4a4b-867d-88660b3edf1f-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "6c95f5c0-186a-4a4b-867d-88660b3edf1f" (UID: "6c95f5c0-186a-4a4b-867d-88660b3edf1f"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.503327 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "6c95f5c0-186a-4a4b-867d-88660b3edf1f" (UID: "6c95f5c0-186a-4a4b-867d-88660b3edf1f"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.503783 4824 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-buildworkdir\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.504066 4824 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-system-configs\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.504088 4824 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.504103 4824 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c95f5c0-186a-4a4b-867d-88660b3edf1f-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.503571 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c95f5c0-186a-4a4b-867d-88660b3edf1f-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "6c95f5c0-186a-4a4b-867d-88660b3edf1f" (UID: "6c95f5c0-186a-4a4b-867d-88660b3edf1f"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.503641 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "6c95f5c0-186a-4a4b-867d-88660b3edf1f" (UID: "6c95f5c0-186a-4a4b-867d-88660b3edf1f"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.503772 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "6c95f5c0-186a-4a4b-867d-88660b3edf1f" (UID: "6c95f5c0-186a-4a4b-867d-88660b3edf1f"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.503915 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "6c95f5c0-186a-4a4b-867d-88660b3edf1f" (UID: "6c95f5c0-186a-4a4b-867d-88660b3edf1f"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.508242 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c95f5c0-186a-4a4b-867d-88660b3edf1f-builder-dockercfg-gqxxs-pull" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-pull") pod "6c95f5c0-186a-4a4b-867d-88660b3edf1f" (UID: "6c95f5c0-186a-4a4b-867d-88660b3edf1f"). InnerVolumeSpecName "builder-dockercfg-gqxxs-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.508628 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c95f5c0-186a-4a4b-867d-88660b3edf1f-kube-api-access-dvn47" (OuterVolumeSpecName: "kube-api-access-dvn47") pod "6c95f5c0-186a-4a4b-867d-88660b3edf1f" (UID: "6c95f5c0-186a-4a4b-867d-88660b3edf1f"). InnerVolumeSpecName "kube-api-access-dvn47". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.508697 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c95f5c0-186a-4a4b-867d-88660b3edf1f-builder-dockercfg-gqxxs-push" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-push") pod "6c95f5c0-186a-4a4b-867d-88660b3edf1f" (UID: "6c95f5c0-186a-4a4b-867d-88660b3edf1f"). InnerVolumeSpecName "builder-dockercfg-gqxxs-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.606158 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-container-storage-root\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.606203 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/6c95f5c0-186a-4a4b-867d-88660b3edf1f-builder-dockercfg-gqxxs-push\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.606218 4824 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6c95f5c0-186a-4a4b-867d-88660b3edf1f-buildcachedir\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.606230 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/6c95f5c0-186a-4a4b-867d-88660b3edf1f-builder-dockercfg-gqxxs-pull\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.606242 4824 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-blob-cache\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.606254 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6c95f5c0-186a-4a4b-867d-88660b3edf1f-container-storage-run\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.606263 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvn47\" (UniqueName: \"kubernetes.io/projected/6c95f5c0-186a-4a4b-867d-88660b3edf1f-kube-api-access-dvn47\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:21 crc kubenswrapper[4824]: I0224 00:19:21.606272 4824 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c95f5c0-186a-4a4b-867d-88660b3edf1f-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 24 00:19:22 crc kubenswrapper[4824]: I0224 00:19:22.087255 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_6c95f5c0-186a-4a4b-867d-88660b3edf1f/git-clone/0.log"
Feb 24 00:19:22 crc kubenswrapper[4824]: I0224 00:19:22.087312 4824 generic.go:334] "Generic (PLEG): container finished" podID="6c95f5c0-186a-4a4b-867d-88660b3edf1f" containerID="38e58f7cc777ad190c4f6b480752e9102012919dd4b65943269d93402ea90dad" exitCode=1
Feb 24 00:19:22 crc kubenswrapper[4824]: I0224 00:19:22.087351 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"6c95f5c0-186a-4a4b-867d-88660b3edf1f","Type":"ContainerDied","Data":"38e58f7cc777ad190c4f6b480752e9102012919dd4b65943269d93402ea90dad"}
Feb 24 00:19:22 crc kubenswrapper[4824]: I0224 00:19:22.087388 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"6c95f5c0-186a-4a4b-867d-88660b3edf1f","Type":"ContainerDied","Data":"d5eb4c33df13b7de6ce3c2cf8562f2a4d64e137c350b97d64bc6f16d1d0fafdc"}
Feb 24 00:19:22 crc kubenswrapper[4824]: I0224 00:19:22.087409 4824 scope.go:117] "RemoveContainer" containerID="38e58f7cc777ad190c4f6b480752e9102012919dd4b65943269d93402ea90dad"
Feb 24 00:19:22 crc kubenswrapper[4824]: I0224 00:19:22.087558 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build"
Feb 24 00:19:22 crc kubenswrapper[4824]: I0224 00:19:22.115926 4824 scope.go:117] "RemoveContainer" containerID="38e58f7cc777ad190c4f6b480752e9102012919dd4b65943269d93402ea90dad"
Feb 24 00:19:22 crc kubenswrapper[4824]: E0224 00:19:22.118719 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38e58f7cc777ad190c4f6b480752e9102012919dd4b65943269d93402ea90dad\": container with ID starting with 38e58f7cc777ad190c4f6b480752e9102012919dd4b65943269d93402ea90dad not found: ID does not exist" containerID="38e58f7cc777ad190c4f6b480752e9102012919dd4b65943269d93402ea90dad"
Feb 24 00:19:22 crc kubenswrapper[4824]: I0224 00:19:22.118783 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38e58f7cc777ad190c4f6b480752e9102012919dd4b65943269d93402ea90dad"} err="failed to get container status \"38e58f7cc777ad190c4f6b480752e9102012919dd4b65943269d93402ea90dad\": rpc error: code = NotFound desc = could not find container \"38e58f7cc777ad190c4f6b480752e9102012919dd4b65943269d93402ea90dad\": container with ID starting with 38e58f7cc777ad190c4f6b480752e9102012919dd4b65943269d93402ea90dad not found: ID does not exist"
Feb 24 00:19:22 crc kubenswrapper[4824]: I0224 00:19:22.126700 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"]
Feb 24 00:19:22 crc kubenswrapper[4824]: I0224 00:19:22.132138 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"]
Feb 24 00:19:22 crc kubenswrapper[4824]: I0224 00:19:22.706487 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c95f5c0-186a-4a4b-867d-88660b3edf1f" path="/var/lib/kubelet/pods/6c95f5c0-186a-4a4b-867d-88660b3edf1f/volumes"
Feb 24 00:19:23 crc kubenswrapper[4824]: I0224 00:19:23.276659 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 00:19:23 crc kubenswrapper[4824]: I0224 00:19:23.276752 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.536519 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"]
Feb 24 00:19:31 crc kubenswrapper[4824]: E0224 00:19:31.537409 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c95f5c0-186a-4a4b-867d-88660b3edf1f" containerName="git-clone"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.537430 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c95f5c0-186a-4a4b-867d-88660b3edf1f" containerName="git-clone"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.537647 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c95f5c0-186a-4a4b-867d-88660b3edf1f" containerName="git-clone"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.539041 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.541699 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-gqxxs"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.541986 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-4-global-ca"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.542476 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-4-ca"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.544078 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-4-sys-config"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.551663 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"]
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.572592 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/cb882313-5084-4bd0-b5aa-25322ecd66ac-builder-dockercfg-gqxxs-push\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.572645 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.572683 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.572711 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.572729 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.572767 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.572788 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7p5t\" (UniqueName: \"kubernetes.io/projected/cb882313-5084-4bd0-b5aa-25322ecd66ac-kube-api-access-p7p5t\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.572813 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.572835 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.572867 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb882313-5084-4bd0-b5aa-25322ecd66ac-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.573124 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cb882313-5084-4bd0-b5aa-25322ecd66ac-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.573229 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/cb882313-5084-4bd0-b5aa-25322ecd66ac-builder-dockercfg-gqxxs-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.674291 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cb882313-5084-4bd0-b5aa-25322ecd66ac-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.674349 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/cb882313-5084-4bd0-b5aa-25322ecd66ac-builder-dockercfg-gqxxs-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.674394 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/cb882313-5084-4bd0-b5aa-25322ecd66ac-builder-dockercfg-gqxxs-push\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.674492 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cb882313-5084-4bd0-b5aa-25322ecd66ac-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.675014 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.675270 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.675476 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.675552 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.675598 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.675718 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.675743 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7p5t\" (UniqueName: \"kubernetes.io/projected/cb882313-5084-4bd0-b5aa-25322ecd66ac-kube-api-access-p7p5t\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.675754 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.675822 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.675846 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.675907 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb882313-5084-4bd0-b5aa-25322ecd66ac-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.676011 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.676069 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb882313-5084-4bd0-b5aa-25322ecd66ac-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.676293 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.676698 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.677030 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.677578 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.681762 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/cb882313-5084-4bd0-b5aa-25322ecd66ac-builder-dockercfg-gqxxs-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.682653 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/cb882313-5084-4bd0-b5aa-25322ecd66ac-builder-dockercfg-gqxxs-push\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.695145 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7p5t\" (UniqueName: \"kubernetes.io/projected/cb882313-5084-4bd0-b5aa-25322ecd66ac-kube-api-access-p7p5t\") pod \"service-telemetry-operator-4-build\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:31 crc kubenswrapper[4824]: I0224 00:19:31.860248 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:32 crc kubenswrapper[4824]: I0224 00:19:32.090621 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"]
Feb 24 00:19:32 crc kubenswrapper[4824]: I0224 00:19:32.162205 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"cb882313-5084-4bd0-b5aa-25322ecd66ac","Type":"ContainerStarted","Data":"805b4c2c99f251c12585542f59ae630605d117b692e0b9a0ce96358656635812"}
Feb 24 00:19:33 crc kubenswrapper[4824]: I0224 00:19:33.173906 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"cb882313-5084-4bd0-b5aa-25322ecd66ac","Type":"ContainerStarted","Data":"1bc4d1fc862ebef78c7ee2786774d413f91471e8c4b13179ec6808d34cada1ed"}
Feb 24 00:19:33 crc kubenswrapper[4824]: E0224 00:19:33.249903 4824 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=4548264831213021485, SKID=, AKID=10:68:53:25:D1:3D:2E:E3:54:0D:95:0B:6A:90:F1:BD:A9:4F:27:6E failed: x509: certificate signed by unknown authority"
Feb 24 00:19:34 crc kubenswrapper[4824]: I0224 00:19:34.281693 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"]
Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.186671 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-4-build" podUID="cb882313-5084-4bd0-b5aa-25322ecd66ac" containerName="git-clone" containerID="cri-o://1bc4d1fc862ebef78c7ee2786774d413f91471e8c4b13179ec6808d34cada1ed" gracePeriod=30
Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.582379 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_cb882313-5084-4bd0-b5aa-25322ecd66ac/git-clone/0.log"
Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.582828 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build"
Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.642311 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-proxy-ca-bundles\") pod \"cb882313-5084-4bd0-b5aa-25322ecd66ac\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") "
Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.642376 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/cb882313-5084-4bd0-b5aa-25322ecd66ac-builder-dockercfg-gqxxs-push\") pod \"cb882313-5084-4bd0-b5aa-25322ecd66ac\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") "
Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.642453 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-ca-bundles\") pod \"cb882313-5084-4bd0-b5aa-25322ecd66ac\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") "
Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.642622 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume
\"kube-api-access-p7p5t\" (UniqueName: \"kubernetes.io/projected/cb882313-5084-4bd0-b5aa-25322ecd66ac-kube-api-access-p7p5t\") pod \"cb882313-5084-4bd0-b5aa-25322ecd66ac\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.642655 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb882313-5084-4bd0-b5aa-25322ecd66ac-node-pullsecrets\") pod \"cb882313-5084-4bd0-b5aa-25322ecd66ac\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.642694 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-buildworkdir\") pod \"cb882313-5084-4bd0-b5aa-25322ecd66ac\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.642719 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-blob-cache\") pod \"cb882313-5084-4bd0-b5aa-25322ecd66ac\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.642751 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-container-storage-root\") pod \"cb882313-5084-4bd0-b5aa-25322ecd66ac\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.642796 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-container-storage-run\") pod \"cb882313-5084-4bd0-b5aa-25322ecd66ac\" (UID: 
\"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.642819 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/cb882313-5084-4bd0-b5aa-25322ecd66ac-builder-dockercfg-gqxxs-pull\") pod \"cb882313-5084-4bd0-b5aa-25322ecd66ac\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.642866 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-system-configs\") pod \"cb882313-5084-4bd0-b5aa-25322ecd66ac\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.642895 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cb882313-5084-4bd0-b5aa-25322ecd66ac-buildcachedir\") pod \"cb882313-5084-4bd0-b5aa-25322ecd66ac\" (UID: \"cb882313-5084-4bd0-b5aa-25322ecd66ac\") " Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.643242 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb882313-5084-4bd0-b5aa-25322ecd66ac-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "cb882313-5084-4bd0-b5aa-25322ecd66ac" (UID: "cb882313-5084-4bd0-b5aa-25322ecd66ac"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.643233 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "cb882313-5084-4bd0-b5aa-25322ecd66ac" (UID: "cb882313-5084-4bd0-b5aa-25322ecd66ac"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.643668 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "cb882313-5084-4bd0-b5aa-25322ecd66ac" (UID: "cb882313-5084-4bd0-b5aa-25322ecd66ac"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.643775 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb882313-5084-4bd0-b5aa-25322ecd66ac-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "cb882313-5084-4bd0-b5aa-25322ecd66ac" (UID: "cb882313-5084-4bd0-b5aa-25322ecd66ac"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.644000 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "cb882313-5084-4bd0-b5aa-25322ecd66ac" (UID: "cb882313-5084-4bd0-b5aa-25322ecd66ac"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.644355 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "cb882313-5084-4bd0-b5aa-25322ecd66ac" (UID: "cb882313-5084-4bd0-b5aa-25322ecd66ac"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.644477 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "cb882313-5084-4bd0-b5aa-25322ecd66ac" (UID: "cb882313-5084-4bd0-b5aa-25322ecd66ac"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.644662 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "cb882313-5084-4bd0-b5aa-25322ecd66ac" (UID: "cb882313-5084-4bd0-b5aa-25322ecd66ac"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.644754 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "cb882313-5084-4bd0-b5aa-25322ecd66ac" (UID: "cb882313-5084-4bd0-b5aa-25322ecd66ac"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.652731 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb882313-5084-4bd0-b5aa-25322ecd66ac-builder-dockercfg-gqxxs-push" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-push") pod "cb882313-5084-4bd0-b5aa-25322ecd66ac" (UID: "cb882313-5084-4bd0-b5aa-25322ecd66ac"). InnerVolumeSpecName "builder-dockercfg-gqxxs-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.652774 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb882313-5084-4bd0-b5aa-25322ecd66ac-builder-dockercfg-gqxxs-pull" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-pull") pod "cb882313-5084-4bd0-b5aa-25322ecd66ac" (UID: "cb882313-5084-4bd0-b5aa-25322ecd66ac"). InnerVolumeSpecName "builder-dockercfg-gqxxs-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.652760 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb882313-5084-4bd0-b5aa-25322ecd66ac-kube-api-access-p7p5t" (OuterVolumeSpecName: "kube-api-access-p7p5t") pod "cb882313-5084-4bd0-b5aa-25322ecd66ac" (UID: "cb882313-5084-4bd0-b5aa-25322ecd66ac"). InnerVolumeSpecName "kube-api-access-p7p5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.744907 4824 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.744953 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7p5t\" (UniqueName: \"kubernetes.io/projected/cb882313-5084-4bd0-b5aa-25322ecd66ac-kube-api-access-p7p5t\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.744965 4824 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb882313-5084-4bd0-b5aa-25322ecd66ac-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.744974 4824 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.744983 4824 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.744993 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.745158 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cb882313-5084-4bd0-b5aa-25322ecd66ac-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.745869 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/cb882313-5084-4bd0-b5aa-25322ecd66ac-builder-dockercfg-gqxxs-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.745942 4824 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.745970 4824 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cb882313-5084-4bd0-b5aa-25322ecd66ac-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.746028 4824 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/cb882313-5084-4bd0-b5aa-25322ecd66ac-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:35 crc kubenswrapper[4824]: I0224 00:19:35.746052 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/cb882313-5084-4bd0-b5aa-25322ecd66ac-builder-dockercfg-gqxxs-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:19:36 crc kubenswrapper[4824]: I0224 00:19:36.193870 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_cb882313-5084-4bd0-b5aa-25322ecd66ac/git-clone/0.log" Feb 24 00:19:36 crc kubenswrapper[4824]: I0224 00:19:36.194208 4824 generic.go:334] "Generic (PLEG): container finished" podID="cb882313-5084-4bd0-b5aa-25322ecd66ac" containerID="1bc4d1fc862ebef78c7ee2786774d413f91471e8c4b13179ec6808d34cada1ed" exitCode=1 Feb 24 00:19:36 crc kubenswrapper[4824]: I0224 00:19:36.194337 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"cb882313-5084-4bd0-b5aa-25322ecd66ac","Type":"ContainerDied","Data":"1bc4d1fc862ebef78c7ee2786774d413f91471e8c4b13179ec6808d34cada1ed"} Feb 24 00:19:36 crc kubenswrapper[4824]: I0224 00:19:36.194409 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:19:36 crc kubenswrapper[4824]: I0224 00:19:36.194454 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"cb882313-5084-4bd0-b5aa-25322ecd66ac","Type":"ContainerDied","Data":"805b4c2c99f251c12585542f59ae630605d117b692e0b9a0ce96358656635812"} Feb 24 00:19:36 crc kubenswrapper[4824]: I0224 00:19:36.194497 4824 scope.go:117] "RemoveContainer" containerID="1bc4d1fc862ebef78c7ee2786774d413f91471e8c4b13179ec6808d34cada1ed" Feb 24 00:19:36 crc kubenswrapper[4824]: I0224 00:19:36.232169 4824 scope.go:117] "RemoveContainer" containerID="1bc4d1fc862ebef78c7ee2786774d413f91471e8c4b13179ec6808d34cada1ed" Feb 24 00:19:36 crc kubenswrapper[4824]: E0224 00:19:36.233078 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bc4d1fc862ebef78c7ee2786774d413f91471e8c4b13179ec6808d34cada1ed\": container with ID starting with 1bc4d1fc862ebef78c7ee2786774d413f91471e8c4b13179ec6808d34cada1ed not found: ID does not exist" containerID="1bc4d1fc862ebef78c7ee2786774d413f91471e8c4b13179ec6808d34cada1ed" Feb 24 00:19:36 crc kubenswrapper[4824]: I0224 00:19:36.233117 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bc4d1fc862ebef78c7ee2786774d413f91471e8c4b13179ec6808d34cada1ed"} err="failed to get container status \"1bc4d1fc862ebef78c7ee2786774d413f91471e8c4b13179ec6808d34cada1ed\": rpc error: code = NotFound desc = could not find container \"1bc4d1fc862ebef78c7ee2786774d413f91471e8c4b13179ec6808d34cada1ed\": container with ID starting with 1bc4d1fc862ebef78c7ee2786774d413f91471e8c4b13179ec6808d34cada1ed not found: ID does not exist" Feb 24 00:19:36 crc kubenswrapper[4824]: I0224 00:19:36.235113 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["service-telemetry/service-telemetry-operator-4-build"] Feb 24 00:19:36 crc kubenswrapper[4824]: I0224 00:19:36.242956 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Feb 24 00:19:36 crc kubenswrapper[4824]: I0224 00:19:36.704892 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb882313-5084-4bd0-b5aa-25322ecd66ac" path="/var/lib/kubelet/pods/cb882313-5084-4bd0-b5aa-25322ecd66ac/volumes" Feb 24 00:19:43 crc kubenswrapper[4824]: I0224 00:19:43.475947 4824 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.647132 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Feb 24 00:19:45 crc kubenswrapper[4824]: E0224 00:19:45.647690 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb882313-5084-4bd0-b5aa-25322ecd66ac" containerName="git-clone" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.647704 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb882313-5084-4bd0-b5aa-25322ecd66ac" containerName="git-clone" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.647832 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb882313-5084-4bd0-b5aa-25322ecd66ac" containerName="git-clone" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.648643 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.653259 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-5-sys-config" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.654135 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-5-global-ca" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.654392 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-gqxxs" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.654804 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-5-ca" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.673966 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.829476 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.829592 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7e61a502-d012-4bfe-9788-440f95e757cf-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.829700 4824 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7e61a502-d012-4bfe-9788-440f95e757cf-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.829770 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7e61a502-d012-4bfe-9788-440f95e757cf-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.829809 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.829844 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/7e61a502-d012-4bfe-9788-440f95e757cf-builder-dockercfg-gqxxs-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.829881 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e61a502-d012-4bfe-9788-440f95e757cf-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: 
\"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.829944 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/7e61a502-d012-4bfe-9788-440f95e757cf-builder-dockercfg-gqxxs-push\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.829970 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkdvw\" (UniqueName: \"kubernetes.io/projected/7e61a502-d012-4bfe-9788-440f95e757cf-kube-api-access-fkdvw\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.830026 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.830070 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.830088 4824 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e61a502-d012-4bfe-9788-440f95e757cf-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.931835 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.931920 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e61a502-d012-4bfe-9788-440f95e757cf-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.931949 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.931981 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7e61a502-d012-4bfe-9788-440f95e757cf-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " 
pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.932009 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7e61a502-d012-4bfe-9788-440f95e757cf-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.932042 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7e61a502-d012-4bfe-9788-440f95e757cf-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.932067 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.932083 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/7e61a502-d012-4bfe-9788-440f95e757cf-builder-dockercfg-gqxxs-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.932101 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e61a502-d012-4bfe-9788-440f95e757cf-build-ca-bundles\") pod 
\"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.932128 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/7e61a502-d012-4bfe-9788-440f95e757cf-builder-dockercfg-gqxxs-push\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.932150 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkdvw\" (UniqueName: \"kubernetes.io/projected/7e61a502-d012-4bfe-9788-440f95e757cf-kube-api-access-fkdvw\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.932166 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7e61a502-d012-4bfe-9788-440f95e757cf-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.932182 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.932589 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.932612 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.932807 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7e61a502-d012-4bfe-9788-440f95e757cf-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.932975 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.933105 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7e61a502-d012-4bfe-9788-440f95e757cf-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.933443 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e61a502-d012-4bfe-9788-440f95e757cf-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.933728 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.934276 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e61a502-d012-4bfe-9788-440f95e757cf-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.945466 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/7e61a502-d012-4bfe-9788-440f95e757cf-builder-dockercfg-gqxxs-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.946057 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/7e61a502-d012-4bfe-9788-440f95e757cf-builder-dockercfg-gqxxs-push\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 
00:19:45.954606 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkdvw\" (UniqueName: \"kubernetes.io/projected/7e61a502-d012-4bfe-9788-440f95e757cf-kube-api-access-fkdvw\") pod \"service-telemetry-operator-5-build\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:45 crc kubenswrapper[4824]: I0224 00:19:45.966567 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:19:46 crc kubenswrapper[4824]: I0224 00:19:46.192861 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Feb 24 00:19:46 crc kubenswrapper[4824]: I0224 00:19:46.270073 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"7e61a502-d012-4bfe-9788-440f95e757cf","Type":"ContainerStarted","Data":"44273c89e677ea30f0737c8184f478f3798ea6ee49c1c3b6c1308da56ccd9ed5"} Feb 24 00:19:47 crc kubenswrapper[4824]: I0224 00:19:47.277950 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"7e61a502-d012-4bfe-9788-440f95e757cf","Type":"ContainerStarted","Data":"c64d5595559c578057b819d0d0c20e76d9b8f63ede84abcd57bbd5cb763e2039"} Feb 24 00:19:53 crc kubenswrapper[4824]: I0224 00:19:53.275982 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:19:53 crc kubenswrapper[4824]: I0224 00:19:53.277204 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:19:53 crc kubenswrapper[4824]: I0224 00:19:53.277278 4824 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" Feb 24 00:19:53 crc kubenswrapper[4824]: I0224 00:19:53.278559 4824 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"43fc5998f7ab77a1ca73519cb6a4280f5869d3a50153e1dc6202d26bc4d9b6a3"} pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 00:19:53 crc kubenswrapper[4824]: I0224 00:19:53.278625 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" containerID="cri-o://43fc5998f7ab77a1ca73519cb6a4280f5869d3a50153e1dc6202d26bc4d9b6a3" gracePeriod=600 Feb 24 00:19:54 crc kubenswrapper[4824]: I0224 00:19:54.340436 4824 generic.go:334] "Generic (PLEG): container finished" podID="7e61a502-d012-4bfe-9788-440f95e757cf" containerID="c64d5595559c578057b819d0d0c20e76d9b8f63ede84abcd57bbd5cb763e2039" exitCode=0 Feb 24 00:19:54 crc kubenswrapper[4824]: I0224 00:19:54.340603 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"7e61a502-d012-4bfe-9788-440f95e757cf","Type":"ContainerDied","Data":"c64d5595559c578057b819d0d0c20e76d9b8f63ede84abcd57bbd5cb763e2039"} Feb 24 00:19:54 crc kubenswrapper[4824]: I0224 00:19:54.347490 4824 generic.go:334] "Generic (PLEG): container finished" podID="939ca085-9383-42e6-b7d6-37f101137273" 
containerID="43fc5998f7ab77a1ca73519cb6a4280f5869d3a50153e1dc6202d26bc4d9b6a3" exitCode=0 Feb 24 00:19:54 crc kubenswrapper[4824]: I0224 00:19:54.347934 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" event={"ID":"939ca085-9383-42e6-b7d6-37f101137273","Type":"ContainerDied","Data":"43fc5998f7ab77a1ca73519cb6a4280f5869d3a50153e1dc6202d26bc4d9b6a3"} Feb 24 00:19:54 crc kubenswrapper[4824]: I0224 00:19:54.347970 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" event={"ID":"939ca085-9383-42e6-b7d6-37f101137273","Type":"ContainerStarted","Data":"960af4768f01705e18d424766fe08bb2ebb2088d821d2cb697f31ab6e24ccd28"} Feb 24 00:19:54 crc kubenswrapper[4824]: I0224 00:19:54.347994 4824 scope.go:117] "RemoveContainer" containerID="14f28b64a526a9334cfaacd13a3a23756d3ea46670a60bcfe695a7e80551056e" Feb 24 00:19:55 crc kubenswrapper[4824]: I0224 00:19:55.358405 4824 generic.go:334] "Generic (PLEG): container finished" podID="7e61a502-d012-4bfe-9788-440f95e757cf" containerID="ba082b7917d1ad0a20e76bc29b76ce7ae092dfc546fb0ff65abd6f53c2cf70f9" exitCode=0 Feb 24 00:19:55 crc kubenswrapper[4824]: I0224 00:19:55.358681 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"7e61a502-d012-4bfe-9788-440f95e757cf","Type":"ContainerDied","Data":"ba082b7917d1ad0a20e76bc29b76ce7ae092dfc546fb0ff65abd6f53c2cf70f9"} Feb 24 00:19:55 crc kubenswrapper[4824]: I0224 00:19:55.404597 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_7e61a502-d012-4bfe-9788-440f95e757cf/manage-dockerfile/0.log" Feb 24 00:19:56 crc kubenswrapper[4824]: I0224 00:19:56.380304 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" 
event={"ID":"7e61a502-d012-4bfe-9788-440f95e757cf","Type":"ContainerStarted","Data":"17c294022e78aa9014d3c492430d5e04773952f0a4b192822b172553c6c46c17"} Feb 24 00:19:56 crc kubenswrapper[4824]: I0224 00:19:56.417093 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-5-build" podStartSLOduration=11.417058839 podStartE2EDuration="11.417058839s" podCreationTimestamp="2026-02-24 00:19:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:19:56.406247568 +0000 UTC m=+860.395872057" watchObservedRunningTime="2026-02-24 00:19:56.417058839 +0000 UTC m=+860.406683308" Feb 24 00:21:25 crc kubenswrapper[4824]: I0224 00:21:25.991552 4824 generic.go:334] "Generic (PLEG): container finished" podID="7e61a502-d012-4bfe-9788-440f95e757cf" containerID="17c294022e78aa9014d3c492430d5e04773952f0a4b192822b172553c6c46c17" exitCode=0 Feb 24 00:21:25 crc kubenswrapper[4824]: I0224 00:21:25.991616 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"7e61a502-d012-4bfe-9788-440f95e757cf","Type":"ContainerDied","Data":"17c294022e78aa9014d3c492430d5e04773952f0a4b192822b172553c6c46c17"} Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.251829 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.325024 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkdvw\" (UniqueName: \"kubernetes.io/projected/7e61a502-d012-4bfe-9788-440f95e757cf-kube-api-access-fkdvw\") pod \"7e61a502-d012-4bfe-9788-440f95e757cf\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.325124 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7e61a502-d012-4bfe-9788-440f95e757cf-node-pullsecrets\") pod \"7e61a502-d012-4bfe-9788-440f95e757cf\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.325177 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-container-storage-root\") pod \"7e61a502-d012-4bfe-9788-440f95e757cf\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.325211 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7e61a502-d012-4bfe-9788-440f95e757cf-buildcachedir\") pod \"7e61a502-d012-4bfe-9788-440f95e757cf\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.325244 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/7e61a502-d012-4bfe-9788-440f95e757cf-builder-dockercfg-gqxxs-pull\") pod \"7e61a502-d012-4bfe-9788-440f95e757cf\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.325278 4824 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7e61a502-d012-4bfe-9788-440f95e757cf-build-system-configs\") pod \"7e61a502-d012-4bfe-9788-440f95e757cf\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.325314 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e61a502-d012-4bfe-9788-440f95e757cf-build-ca-bundles\") pod \"7e61a502-d012-4bfe-9788-440f95e757cf\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.325342 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-build-blob-cache\") pod \"7e61a502-d012-4bfe-9788-440f95e757cf\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.325365 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-container-storage-run\") pod \"7e61a502-d012-4bfe-9788-440f95e757cf\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.325398 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e61a502-d012-4bfe-9788-440f95e757cf-build-proxy-ca-bundles\") pod \"7e61a502-d012-4bfe-9788-440f95e757cf\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.325455 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: 
\"kubernetes.io/secret/7e61a502-d012-4bfe-9788-440f95e757cf-builder-dockercfg-gqxxs-push\") pod \"7e61a502-d012-4bfe-9788-440f95e757cf\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.325482 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-buildworkdir\") pod \"7e61a502-d012-4bfe-9788-440f95e757cf\" (UID: \"7e61a502-d012-4bfe-9788-440f95e757cf\") " Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.326574 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e61a502-d012-4bfe-9788-440f95e757cf-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "7e61a502-d012-4bfe-9788-440f95e757cf" (UID: "7e61a502-d012-4bfe-9788-440f95e757cf"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.326624 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e61a502-d012-4bfe-9788-440f95e757cf-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "7e61a502-d012-4bfe-9788-440f95e757cf" (UID: "7e61a502-d012-4bfe-9788-440f95e757cf"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.326661 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e61a502-d012-4bfe-9788-440f95e757cf-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "7e61a502-d012-4bfe-9788-440f95e757cf" (UID: "7e61a502-d012-4bfe-9788-440f95e757cf"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.327833 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e61a502-d012-4bfe-9788-440f95e757cf-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "7e61a502-d012-4bfe-9788-440f95e757cf" (UID: "7e61a502-d012-4bfe-9788-440f95e757cf"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.327871 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "7e61a502-d012-4bfe-9788-440f95e757cf" (UID: "7e61a502-d012-4bfe-9788-440f95e757cf"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.327859 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e61a502-d012-4bfe-9788-440f95e757cf-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "7e61a502-d012-4bfe-9788-440f95e757cf" (UID: "7e61a502-d012-4bfe-9788-440f95e757cf"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.336852 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e61a502-d012-4bfe-9788-440f95e757cf-builder-dockercfg-gqxxs-pull" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-pull") pod "7e61a502-d012-4bfe-9788-440f95e757cf" (UID: "7e61a502-d012-4bfe-9788-440f95e757cf"). InnerVolumeSpecName "builder-dockercfg-gqxxs-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.336969 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e61a502-d012-4bfe-9788-440f95e757cf-kube-api-access-fkdvw" (OuterVolumeSpecName: "kube-api-access-fkdvw") pod "7e61a502-d012-4bfe-9788-440f95e757cf" (UID: "7e61a502-d012-4bfe-9788-440f95e757cf"). InnerVolumeSpecName "kube-api-access-fkdvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.337107 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e61a502-d012-4bfe-9788-440f95e757cf-builder-dockercfg-gqxxs-push" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-push") pod "7e61a502-d012-4bfe-9788-440f95e757cf" (UID: "7e61a502-d012-4bfe-9788-440f95e757cf"). InnerVolumeSpecName "builder-dockercfg-gqxxs-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.366462 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "7e61a502-d012-4bfe-9788-440f95e757cf" (UID: "7e61a502-d012-4bfe-9788-440f95e757cf"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.427364 4824 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e61a502-d012-4bfe-9788-440f95e757cf-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.427408 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.427425 4824 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e61a502-d012-4bfe-9788-440f95e757cf-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.427437 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/7e61a502-d012-4bfe-9788-440f95e757cf-builder-dockercfg-gqxxs-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.427450 4824 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.427462 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkdvw\" (UniqueName: \"kubernetes.io/projected/7e61a502-d012-4bfe-9788-440f95e757cf-kube-api-access-fkdvw\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.427472 4824 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7e61a502-d012-4bfe-9788-440f95e757cf-node-pullsecrets\") on node \"crc\" DevicePath 
\"\"" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.427485 4824 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7e61a502-d012-4bfe-9788-440f95e757cf-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.427496 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/7e61a502-d012-4bfe-9788-440f95e757cf-builder-dockercfg-gqxxs-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.427508 4824 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7e61a502-d012-4bfe-9788-440f95e757cf-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.516274 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "7e61a502-d012-4bfe-9788-440f95e757cf" (UID: "7e61a502-d012-4bfe-9788-440f95e757cf"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:21:27 crc kubenswrapper[4824]: I0224 00:21:27.529110 4824 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:28 crc kubenswrapper[4824]: I0224 00:21:28.008174 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"7e61a502-d012-4bfe-9788-440f95e757cf","Type":"ContainerDied","Data":"44273c89e677ea30f0737c8184f478f3798ea6ee49c1c3b6c1308da56ccd9ed5"} Feb 24 00:21:28 crc kubenswrapper[4824]: I0224 00:21:28.008226 4824 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44273c89e677ea30f0737c8184f478f3798ea6ee49c1c3b6c1308da56ccd9ed5" Feb 24 00:21:28 crc kubenswrapper[4824]: I0224 00:21:28.008316 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:21:29 crc kubenswrapper[4824]: I0224 00:21:29.188741 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "7e61a502-d012-4bfe-9788-440f95e757cf" (UID: "7e61a502-d012-4bfe-9788-440f95e757cf"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:21:29 crc kubenswrapper[4824]: I0224 00:21:29.255443 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7e61a502-d012-4bfe-9788-440f95e757cf-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.291208 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 24 00:21:31 crc kubenswrapper[4824]: E0224 00:21:31.291541 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e61a502-d012-4bfe-9788-440f95e757cf" containerName="manage-dockerfile" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.291553 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e61a502-d012-4bfe-9788-440f95e757cf" containerName="manage-dockerfile" Feb 24 00:21:31 crc kubenswrapper[4824]: E0224 00:21:31.291564 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e61a502-d012-4bfe-9788-440f95e757cf" containerName="git-clone" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.291571 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e61a502-d012-4bfe-9788-440f95e757cf" containerName="git-clone" Feb 24 00:21:31 crc kubenswrapper[4824]: E0224 00:21:31.291581 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e61a502-d012-4bfe-9788-440f95e757cf" containerName="docker-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.291588 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e61a502-d012-4bfe-9788-440f95e757cf" containerName="docker-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.291743 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e61a502-d012-4bfe-9788-440f95e757cf" containerName="docker-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.292775 4824 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.296546 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-global-ca" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.297919 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-gqxxs" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.299349 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-sys-config" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.301388 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-ca" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.313871 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.389598 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.389668 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7c94136e-3210-48f4-bd2c-cbfb25d117b6-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.389703 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.389785 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/7c94136e-3210-48f4-bd2c-cbfb25d117b6-builder-dockercfg-gqxxs-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.389821 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7znl\" (UniqueName: \"kubernetes.io/projected/7c94136e-3210-48f4-bd2c-cbfb25d117b6-kube-api-access-n7znl\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.389847 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.389869 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " 
pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.389890 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.390024 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.390128 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/7c94136e-3210-48f4-bd2c-cbfb25d117b6-builder-dockercfg-gqxxs-push\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.390279 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7c94136e-3210-48f4-bd2c-cbfb25d117b6-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.390318 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.492506 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/7c94136e-3210-48f4-bd2c-cbfb25d117b6-builder-dockercfg-gqxxs-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.492596 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7znl\" (UniqueName: \"kubernetes.io/projected/7c94136e-3210-48f4-bd2c-cbfb25d117b6-kube-api-access-n7znl\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.492638 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.492669 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.492693 4824 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.492726 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.492764 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/7c94136e-3210-48f4-bd2c-cbfb25d117b6-builder-dockercfg-gqxxs-push\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.492820 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7c94136e-3210-48f4-bd2c-cbfb25d117b6-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.492855 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: 
I0224 00:21:31.492915 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.492944 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7c94136e-3210-48f4-bd2c-cbfb25d117b6-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.492973 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.493491 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7c94136e-3210-48f4-bd2c-cbfb25d117b6-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.493549 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7c94136e-3210-48f4-bd2c-cbfb25d117b6-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc 
kubenswrapper[4824]: I0224 00:21:31.493747 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.494031 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.494196 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.494640 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.494693 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 
00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.494930 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.494992 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.502302 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/7c94136e-3210-48f4-bd2c-cbfb25d117b6-builder-dockercfg-gqxxs-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.502320 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/7c94136e-3210-48f4-bd2c-cbfb25d117b6-builder-dockercfg-gqxxs-push\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.514437 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7znl\" (UniqueName: \"kubernetes.io/projected/7c94136e-3210-48f4-bd2c-cbfb25d117b6-kube-api-access-n7znl\") pod \"smart-gateway-operator-1-build\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " 
pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.615095 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:31 crc kubenswrapper[4824]: I0224 00:21:31.868215 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 24 00:21:32 crc kubenswrapper[4824]: I0224 00:21:32.044001 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"7c94136e-3210-48f4-bd2c-cbfb25d117b6","Type":"ContainerStarted","Data":"38ae562907291985af4b9d5f5cf7656a88ee4ee3a1caf482cc7dab2a3e8b6f2c"} Feb 24 00:21:33 crc kubenswrapper[4824]: I0224 00:21:33.052317 4824 generic.go:334] "Generic (PLEG): container finished" podID="7c94136e-3210-48f4-bd2c-cbfb25d117b6" containerID="a826809630bf7ea89dbd405b13377d29a8fc6004a50e897939509359da21749a" exitCode=0 Feb 24 00:21:33 crc kubenswrapper[4824]: I0224 00:21:33.052790 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"7c94136e-3210-48f4-bd2c-cbfb25d117b6","Type":"ContainerDied","Data":"a826809630bf7ea89dbd405b13377d29a8fc6004a50e897939509359da21749a"} Feb 24 00:21:34 crc kubenswrapper[4824]: I0224 00:21:34.062982 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"7c94136e-3210-48f4-bd2c-cbfb25d117b6","Type":"ContainerStarted","Data":"39243a21b42b77e3a813ff45c21060c950e9e46e56e45b01ee3b7c5543bc3f0d"} Feb 24 00:21:41 crc kubenswrapper[4824]: I0224 00:21:41.723493 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=10.723474433 podStartE2EDuration="10.723474433s" podCreationTimestamp="2026-02-24 00:21:31 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:21:34.092099572 +0000 UTC m=+958.081724041" watchObservedRunningTime="2026-02-24 00:21:41.723474433 +0000 UTC m=+965.713098902" Feb 24 00:21:41 crc kubenswrapper[4824]: I0224 00:21:41.728595 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 24 00:21:41 crc kubenswrapper[4824]: I0224 00:21:41.728818 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="7c94136e-3210-48f4-bd2c-cbfb25d117b6" containerName="docker-build" containerID="cri-o://39243a21b42b77e3a813ff45c21060c950e9e46e56e45b01ee3b7c5543bc3f0d" gracePeriod=30 Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.381024 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.382965 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.385736 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-global-ca" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.386029 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-sys-config" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.386204 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-ca" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.405854 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.493990 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/87c4f157-f66c-485a-b29d-db482b59c2a1-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.494431 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/87c4f157-f66c-485a-b29d-db482b59c2a1-builder-dockercfg-gqxxs-push\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.494547 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-container-storage-run\") pod 
\"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.494703 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.494798 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87c4f157-f66c-485a-b29d-db482b59c2a1-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.495831 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/87c4f157-f66c-485a-b29d-db482b59c2a1-builder-dockercfg-gqxxs-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.495908 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49jr7\" (UniqueName: \"kubernetes.io/projected/87c4f157-f66c-485a-b29d-db482b59c2a1-kube-api-access-49jr7\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.496068 4824 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/87c4f157-f66c-485a-b29d-db482b59c2a1-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.496159 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.496213 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87c4f157-f66c-485a-b29d-db482b59c2a1-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.496270 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/87c4f157-f66c-485a-b29d-db482b59c2a1-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.496290 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " 
pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.598473 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.598568 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87c4f157-f66c-485a-b29d-db482b59c2a1-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.598616 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/87c4f157-f66c-485a-b29d-db482b59c2a1-builder-dockercfg-gqxxs-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.598641 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49jr7\" (UniqueName: \"kubernetes.io/projected/87c4f157-f66c-485a-b29d-db482b59c2a1-kube-api-access-49jr7\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.598676 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/87c4f157-f66c-485a-b29d-db482b59c2a1-build-system-configs\") pod 
\"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.598709 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.598740 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87c4f157-f66c-485a-b29d-db482b59c2a1-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.598773 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/87c4f157-f66c-485a-b29d-db482b59c2a1-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.598796 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.598824 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/87c4f157-f66c-485a-b29d-db482b59c2a1-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.598880 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/87c4f157-f66c-485a-b29d-db482b59c2a1-builder-dockercfg-gqxxs-push\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.598913 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.599369 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/87c4f157-f66c-485a-b29d-db482b59c2a1-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.599400 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/87c4f157-f66c-485a-b29d-db482b59c2a1-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.599601 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.599633 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.599754 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.600269 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/87c4f157-f66c-485a-b29d-db482b59c2a1-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.600457 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87c4f157-f66c-485a-b29d-db482b59c2a1-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.600660 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87c4f157-f66c-485a-b29d-db482b59c2a1-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.600906 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.610731 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/87c4f157-f66c-485a-b29d-db482b59c2a1-builder-dockercfg-gqxxs-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.611009 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/87c4f157-f66c-485a-b29d-db482b59c2a1-builder-dockercfg-gqxxs-push\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.636694 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49jr7\" (UniqueName: \"kubernetes.io/projected/87c4f157-f66c-485a-b29d-db482b59c2a1-kube-api-access-49jr7\") pod \"smart-gateway-operator-2-build\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.705966 4824 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:21:43 crc kubenswrapper[4824]: I0224 00:21:43.928127 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Feb 24 00:21:44 crc kubenswrapper[4824]: I0224 00:21:44.158075 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"87c4f157-f66c-485a-b29d-db482b59c2a1","Type":"ContainerStarted","Data":"360ddab163ca47c1a0fb0338c2f478a6b9ea9e3e2cf8ffd6966eab886336bcb9"} Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.270295 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_7c94136e-3210-48f4-bd2c-cbfb25d117b6/docker-build/0.log" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.271625 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.288415 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_7c94136e-3210-48f4-bd2c-cbfb25d117b6/docker-build/0.log" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.289234 4824 generic.go:334] "Generic (PLEG): container finished" podID="7c94136e-3210-48f4-bd2c-cbfb25d117b6" containerID="39243a21b42b77e3a813ff45c21060c950e9e46e56e45b01ee3b7c5543bc3f0d" exitCode=1 Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.289287 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"7c94136e-3210-48f4-bd2c-cbfb25d117b6","Type":"ContainerDied","Data":"39243a21b42b77e3a813ff45c21060c950e9e46e56e45b01ee3b7c5543bc3f0d"} Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.289327 4824 scope.go:117] "RemoveContainer" 
containerID="39243a21b42b77e3a813ff45c21060c950e9e46e56e45b01ee3b7c5543bc3f0d" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.360322 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-system-configs\") pod \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.360392 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-proxy-ca-bundles\") pod \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.360431 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/7c94136e-3210-48f4-bd2c-cbfb25d117b6-builder-dockercfg-gqxxs-push\") pod \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.360494 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-blob-cache\") pod \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.360573 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7c94136e-3210-48f4-bd2c-cbfb25d117b6-buildcachedir\") pod \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.360611 4824 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/7c94136e-3210-48f4-bd2c-cbfb25d117b6-builder-dockercfg-gqxxs-pull\") pod \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.360639 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7znl\" (UniqueName: \"kubernetes.io/projected/7c94136e-3210-48f4-bd2c-cbfb25d117b6-kube-api-access-n7znl\") pod \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.360720 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-container-storage-run\") pod \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.360775 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-buildworkdir\") pod \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.360799 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7c94136e-3210-48f4-bd2c-cbfb25d117b6-node-pullsecrets\") pod \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.360826 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-container-storage-root\") pod \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.360850 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-ca-bundles\") pod \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\" (UID: \"7c94136e-3210-48f4-bd2c-cbfb25d117b6\") " Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.362175 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "7c94136e-3210-48f4-bd2c-cbfb25d117b6" (UID: "7c94136e-3210-48f4-bd2c-cbfb25d117b6"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.362336 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "7c94136e-3210-48f4-bd2c-cbfb25d117b6" (UID: "7c94136e-3210-48f4-bd2c-cbfb25d117b6"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.362377 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c94136e-3210-48f4-bd2c-cbfb25d117b6-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "7c94136e-3210-48f4-bd2c-cbfb25d117b6" (UID: "7c94136e-3210-48f4-bd2c-cbfb25d117b6"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.362524 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "7c94136e-3210-48f4-bd2c-cbfb25d117b6" (UID: "7c94136e-3210-48f4-bd2c-cbfb25d117b6"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.362561 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c94136e-3210-48f4-bd2c-cbfb25d117b6-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "7c94136e-3210-48f4-bd2c-cbfb25d117b6" (UID: "7c94136e-3210-48f4-bd2c-cbfb25d117b6"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.363305 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "7c94136e-3210-48f4-bd2c-cbfb25d117b6" (UID: "7c94136e-3210-48f4-bd2c-cbfb25d117b6"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.378151 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "7c94136e-3210-48f4-bd2c-cbfb25d117b6" (UID: "7c94136e-3210-48f4-bd2c-cbfb25d117b6"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.387040 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c94136e-3210-48f4-bd2c-cbfb25d117b6-builder-dockercfg-gqxxs-push" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-push") pod "7c94136e-3210-48f4-bd2c-cbfb25d117b6" (UID: "7c94136e-3210-48f4-bd2c-cbfb25d117b6"). InnerVolumeSpecName "builder-dockercfg-gqxxs-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.387088 4824 scope.go:117] "RemoveContainer" containerID="a826809630bf7ea89dbd405b13377d29a8fc6004a50e897939509359da21749a" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.387091 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c94136e-3210-48f4-bd2c-cbfb25d117b6-kube-api-access-n7znl" (OuterVolumeSpecName: "kube-api-access-n7znl") pod "7c94136e-3210-48f4-bd2c-cbfb25d117b6" (UID: "7c94136e-3210-48f4-bd2c-cbfb25d117b6"). InnerVolumeSpecName "kube-api-access-n7znl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.392883 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c94136e-3210-48f4-bd2c-cbfb25d117b6-builder-dockercfg-gqxxs-pull" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-pull") pod "7c94136e-3210-48f4-bd2c-cbfb25d117b6" (UID: "7c94136e-3210-48f4-bd2c-cbfb25d117b6"). InnerVolumeSpecName "builder-dockercfg-gqxxs-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.462632 4824 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.462693 4824 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.462706 4824 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7c94136e-3210-48f4-bd2c-cbfb25d117b6-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.462716 4824 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.462728 4824 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.462738 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/7c94136e-3210-48f4-bd2c-cbfb25d117b6-builder-dockercfg-gqxxs-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.462750 4824 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7c94136e-3210-48f4-bd2c-cbfb25d117b6-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:47 
crc kubenswrapper[4824]: I0224 00:21:47.462762 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/7c94136e-3210-48f4-bd2c-cbfb25d117b6-builder-dockercfg-gqxxs-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.462772 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7znl\" (UniqueName: \"kubernetes.io/projected/7c94136e-3210-48f4-bd2c-cbfb25d117b6-kube-api-access-n7znl\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.462781 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.763689 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "7c94136e-3210-48f4-bd2c-cbfb25d117b6" (UID: "7c94136e-3210-48f4-bd2c-cbfb25d117b6"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.768033 4824 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.770723 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "7c94136e-3210-48f4-bd2c-cbfb25d117b6" (UID: "7c94136e-3210-48f4-bd2c-cbfb25d117b6"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:21:47 crc kubenswrapper[4824]: I0224 00:21:47.869511 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7c94136e-3210-48f4-bd2c-cbfb25d117b6-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:48 crc kubenswrapper[4824]: I0224 00:21:48.297902 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 24 00:21:48 crc kubenswrapper[4824]: I0224 00:21:48.297894 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"7c94136e-3210-48f4-bd2c-cbfb25d117b6","Type":"ContainerDied","Data":"38ae562907291985af4b9d5f5cf7656a88ee4ee3a1caf482cc7dab2a3e8b6f2c"} Feb 24 00:21:48 crc kubenswrapper[4824]: I0224 00:21:48.302342 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"87c4f157-f66c-485a-b29d-db482b59c2a1","Type":"ContainerStarted","Data":"45ae95c2644ed487ab9447779612d358e7e091ad8916d58155d5c783400a2e9d"} Feb 24 00:21:48 crc kubenswrapper[4824]: I0224 00:21:48.378036 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 24 00:21:48 crc kubenswrapper[4824]: I0224 00:21:48.388247 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 24 00:21:48 crc kubenswrapper[4824]: E0224 00:21:48.388690 4824 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c94136e_3210_48f4_bd2c_cbfb25d117b6.slice/crio-38ae562907291985af4b9d5f5cf7656a88ee4ee3a1caf482cc7dab2a3e8b6f2c\": RecentStats: unable to find data in memory cache]" Feb 24 00:21:48 crc kubenswrapper[4824]: I0224 
00:21:48.702378 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c94136e-3210-48f4-bd2c-cbfb25d117b6" path="/var/lib/kubelet/pods/7c94136e-3210-48f4-bd2c-cbfb25d117b6/volumes" Feb 24 00:21:49 crc kubenswrapper[4824]: I0224 00:21:49.311755 4824 generic.go:334] "Generic (PLEG): container finished" podID="87c4f157-f66c-485a-b29d-db482b59c2a1" containerID="45ae95c2644ed487ab9447779612d358e7e091ad8916d58155d5c783400a2e9d" exitCode=0 Feb 24 00:21:49 crc kubenswrapper[4824]: I0224 00:21:49.311833 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"87c4f157-f66c-485a-b29d-db482b59c2a1","Type":"ContainerDied","Data":"45ae95c2644ed487ab9447779612d358e7e091ad8916d58155d5c783400a2e9d"} Feb 24 00:21:50 crc kubenswrapper[4824]: I0224 00:21:50.321879 4824 generic.go:334] "Generic (PLEG): container finished" podID="87c4f157-f66c-485a-b29d-db482b59c2a1" containerID="cd0d73920f56d7da26bfcf6fd489e39d2a6046d8d8b6ab81f5c0c605a8da38fd" exitCode=0 Feb 24 00:21:50 crc kubenswrapper[4824]: I0224 00:21:50.322026 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"87c4f157-f66c-485a-b29d-db482b59c2a1","Type":"ContainerDied","Data":"cd0d73920f56d7da26bfcf6fd489e39d2a6046d8d8b6ab81f5c0c605a8da38fd"} Feb 24 00:21:50 crc kubenswrapper[4824]: I0224 00:21:50.367343 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_87c4f157-f66c-485a-b29d-db482b59c2a1/manage-dockerfile/0.log" Feb 24 00:21:51 crc kubenswrapper[4824]: I0224 00:21:51.331625 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"87c4f157-f66c-485a-b29d-db482b59c2a1","Type":"ContainerStarted","Data":"9aa60fad0c5e5b125c85a695ee9699aa7edfa41691acaf3323a949446eac0c81"} Feb 24 00:21:51 crc kubenswrapper[4824]: I0224 00:21:51.367262 4824 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=8.367238147 podStartE2EDuration="8.367238147s" podCreationTimestamp="2026-02-24 00:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:21:51.360443943 +0000 UTC m=+975.350068412" watchObservedRunningTime="2026-02-24 00:21:51.367238147 +0000 UTC m=+975.356862626" Feb 24 00:21:53 crc kubenswrapper[4824]: I0224 00:21:53.276358 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:21:53 crc kubenswrapper[4824]: I0224 00:21:53.276693 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:21:56 crc kubenswrapper[4824]: I0224 00:21:56.612023 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q2g4m"] Feb 24 00:21:56 crc kubenswrapper[4824]: E0224 00:21:56.612434 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c94136e-3210-48f4-bd2c-cbfb25d117b6" containerName="manage-dockerfile" Feb 24 00:21:56 crc kubenswrapper[4824]: I0224 00:21:56.612457 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c94136e-3210-48f4-bd2c-cbfb25d117b6" containerName="manage-dockerfile" Feb 24 00:21:56 crc kubenswrapper[4824]: E0224 00:21:56.612482 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c94136e-3210-48f4-bd2c-cbfb25d117b6" 
containerName="docker-build" Feb 24 00:21:56 crc kubenswrapper[4824]: I0224 00:21:56.612496 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c94136e-3210-48f4-bd2c-cbfb25d117b6" containerName="docker-build" Feb 24 00:21:56 crc kubenswrapper[4824]: I0224 00:21:56.612705 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c94136e-3210-48f4-bd2c-cbfb25d117b6" containerName="docker-build" Feb 24 00:21:56 crc kubenswrapper[4824]: I0224 00:21:56.614046 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q2g4m" Feb 24 00:21:56 crc kubenswrapper[4824]: I0224 00:21:56.630204 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q2g4m"] Feb 24 00:21:56 crc kubenswrapper[4824]: I0224 00:21:56.713073 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e34f5eb-6bdc-406e-aaed-ff979c64f6db-utilities\") pod \"redhat-operators-q2g4m\" (UID: \"5e34f5eb-6bdc-406e-aaed-ff979c64f6db\") " pod="openshift-marketplace/redhat-operators-q2g4m" Feb 24 00:21:56 crc kubenswrapper[4824]: I0224 00:21:56.713703 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e34f5eb-6bdc-406e-aaed-ff979c64f6db-catalog-content\") pod \"redhat-operators-q2g4m\" (UID: \"5e34f5eb-6bdc-406e-aaed-ff979c64f6db\") " pod="openshift-marketplace/redhat-operators-q2g4m" Feb 24 00:21:56 crc kubenswrapper[4824]: I0224 00:21:56.713909 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq7jv\" (UniqueName: \"kubernetes.io/projected/5e34f5eb-6bdc-406e-aaed-ff979c64f6db-kube-api-access-lq7jv\") pod \"redhat-operators-q2g4m\" (UID: \"5e34f5eb-6bdc-406e-aaed-ff979c64f6db\") " 
pod="openshift-marketplace/redhat-operators-q2g4m" Feb 24 00:21:56 crc kubenswrapper[4824]: I0224 00:21:56.815854 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e34f5eb-6bdc-406e-aaed-ff979c64f6db-utilities\") pod \"redhat-operators-q2g4m\" (UID: \"5e34f5eb-6bdc-406e-aaed-ff979c64f6db\") " pod="openshift-marketplace/redhat-operators-q2g4m" Feb 24 00:21:56 crc kubenswrapper[4824]: I0224 00:21:56.815922 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e34f5eb-6bdc-406e-aaed-ff979c64f6db-catalog-content\") pod \"redhat-operators-q2g4m\" (UID: \"5e34f5eb-6bdc-406e-aaed-ff979c64f6db\") " pod="openshift-marketplace/redhat-operators-q2g4m" Feb 24 00:21:56 crc kubenswrapper[4824]: I0224 00:21:56.815969 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq7jv\" (UniqueName: \"kubernetes.io/projected/5e34f5eb-6bdc-406e-aaed-ff979c64f6db-kube-api-access-lq7jv\") pod \"redhat-operators-q2g4m\" (UID: \"5e34f5eb-6bdc-406e-aaed-ff979c64f6db\") " pod="openshift-marketplace/redhat-operators-q2g4m" Feb 24 00:21:56 crc kubenswrapper[4824]: I0224 00:21:56.817231 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e34f5eb-6bdc-406e-aaed-ff979c64f6db-utilities\") pod \"redhat-operators-q2g4m\" (UID: \"5e34f5eb-6bdc-406e-aaed-ff979c64f6db\") " pod="openshift-marketplace/redhat-operators-q2g4m" Feb 24 00:21:56 crc kubenswrapper[4824]: I0224 00:21:56.817752 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e34f5eb-6bdc-406e-aaed-ff979c64f6db-catalog-content\") pod \"redhat-operators-q2g4m\" (UID: \"5e34f5eb-6bdc-406e-aaed-ff979c64f6db\") " pod="openshift-marketplace/redhat-operators-q2g4m" Feb 24 00:21:56 crc 
kubenswrapper[4824]: I0224 00:21:56.843627 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq7jv\" (UniqueName: \"kubernetes.io/projected/5e34f5eb-6bdc-406e-aaed-ff979c64f6db-kube-api-access-lq7jv\") pod \"redhat-operators-q2g4m\" (UID: \"5e34f5eb-6bdc-406e-aaed-ff979c64f6db\") " pod="openshift-marketplace/redhat-operators-q2g4m" Feb 24 00:21:56 crc kubenswrapper[4824]: I0224 00:21:56.940223 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q2g4m" Feb 24 00:21:57 crc kubenswrapper[4824]: I0224 00:21:57.385456 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q2g4m"] Feb 24 00:21:57 crc kubenswrapper[4824]: W0224 00:21:57.391725 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e34f5eb_6bdc_406e_aaed_ff979c64f6db.slice/crio-c154e571981648f7115b3e1d513a6874710dbdb0e4c9bd37e0e746f9c26709ae WatchSource:0}: Error finding container c154e571981648f7115b3e1d513a6874710dbdb0e4c9bd37e0e746f9c26709ae: Status 404 returned error can't find the container with id c154e571981648f7115b3e1d513a6874710dbdb0e4c9bd37e0e746f9c26709ae Feb 24 00:21:58 crc kubenswrapper[4824]: I0224 00:21:58.381495 4824 generic.go:334] "Generic (PLEG): container finished" podID="5e34f5eb-6bdc-406e-aaed-ff979c64f6db" containerID="465fc0bdabcaaeddc87638959d003ee0489fecdf8b29583f92b701894d90d1f2" exitCode=0 Feb 24 00:21:58 crc kubenswrapper[4824]: I0224 00:21:58.381574 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2g4m" event={"ID":"5e34f5eb-6bdc-406e-aaed-ff979c64f6db","Type":"ContainerDied","Data":"465fc0bdabcaaeddc87638959d003ee0489fecdf8b29583f92b701894d90d1f2"} Feb 24 00:21:58 crc kubenswrapper[4824]: I0224 00:21:58.381643 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-q2g4m" event={"ID":"5e34f5eb-6bdc-406e-aaed-ff979c64f6db","Type":"ContainerStarted","Data":"c154e571981648f7115b3e1d513a6874710dbdb0e4c9bd37e0e746f9c26709ae"} Feb 24 00:21:59 crc kubenswrapper[4824]: I0224 00:21:59.391502 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2g4m" event={"ID":"5e34f5eb-6bdc-406e-aaed-ff979c64f6db","Type":"ContainerStarted","Data":"5bd458f8ab6e18655e507e975fc2e2718159f8ffd4e3a3ea392e57cab31af1b7"} Feb 24 00:22:00 crc kubenswrapper[4824]: I0224 00:22:00.405931 4824 generic.go:334] "Generic (PLEG): container finished" podID="5e34f5eb-6bdc-406e-aaed-ff979c64f6db" containerID="5bd458f8ab6e18655e507e975fc2e2718159f8ffd4e3a3ea392e57cab31af1b7" exitCode=0 Feb 24 00:22:00 crc kubenswrapper[4824]: I0224 00:22:00.405992 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2g4m" event={"ID":"5e34f5eb-6bdc-406e-aaed-ff979c64f6db","Type":"ContainerDied","Data":"5bd458f8ab6e18655e507e975fc2e2718159f8ffd4e3a3ea392e57cab31af1b7"} Feb 24 00:22:09 crc kubenswrapper[4824]: I0224 00:22:09.392675 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lffjr"] Feb 24 00:22:09 crc kubenswrapper[4824]: I0224 00:22:09.395363 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lffjr" Feb 24 00:22:09 crc kubenswrapper[4824]: I0224 00:22:09.408928 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lffjr"] Feb 24 00:22:09 crc kubenswrapper[4824]: I0224 00:22:09.420375 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5daf2179-5386-4221-a15d-0e9787959357-catalog-content\") pod \"community-operators-lffjr\" (UID: \"5daf2179-5386-4221-a15d-0e9787959357\") " pod="openshift-marketplace/community-operators-lffjr" Feb 24 00:22:09 crc kubenswrapper[4824]: I0224 00:22:09.420441 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5daf2179-5386-4221-a15d-0e9787959357-utilities\") pod \"community-operators-lffjr\" (UID: \"5daf2179-5386-4221-a15d-0e9787959357\") " pod="openshift-marketplace/community-operators-lffjr" Feb 24 00:22:09 crc kubenswrapper[4824]: I0224 00:22:09.420474 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4vxp\" (UniqueName: \"kubernetes.io/projected/5daf2179-5386-4221-a15d-0e9787959357-kube-api-access-z4vxp\") pod \"community-operators-lffjr\" (UID: \"5daf2179-5386-4221-a15d-0e9787959357\") " pod="openshift-marketplace/community-operators-lffjr" Feb 24 00:22:09 crc kubenswrapper[4824]: I0224 00:22:09.521697 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4vxp\" (UniqueName: \"kubernetes.io/projected/5daf2179-5386-4221-a15d-0e9787959357-kube-api-access-z4vxp\") pod \"community-operators-lffjr\" (UID: \"5daf2179-5386-4221-a15d-0e9787959357\") " pod="openshift-marketplace/community-operators-lffjr" Feb 24 00:22:09 crc kubenswrapper[4824]: I0224 00:22:09.521877 4824 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5daf2179-5386-4221-a15d-0e9787959357-catalog-content\") pod \"community-operators-lffjr\" (UID: \"5daf2179-5386-4221-a15d-0e9787959357\") " pod="openshift-marketplace/community-operators-lffjr" Feb 24 00:22:09 crc kubenswrapper[4824]: I0224 00:22:09.521915 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5daf2179-5386-4221-a15d-0e9787959357-utilities\") pod \"community-operators-lffjr\" (UID: \"5daf2179-5386-4221-a15d-0e9787959357\") " pod="openshift-marketplace/community-operators-lffjr" Feb 24 00:22:09 crc kubenswrapper[4824]: I0224 00:22:09.522801 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5daf2179-5386-4221-a15d-0e9787959357-catalog-content\") pod \"community-operators-lffjr\" (UID: \"5daf2179-5386-4221-a15d-0e9787959357\") " pod="openshift-marketplace/community-operators-lffjr" Feb 24 00:22:09 crc kubenswrapper[4824]: I0224 00:22:09.522807 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5daf2179-5386-4221-a15d-0e9787959357-utilities\") pod \"community-operators-lffjr\" (UID: \"5daf2179-5386-4221-a15d-0e9787959357\") " pod="openshift-marketplace/community-operators-lffjr" Feb 24 00:22:09 crc kubenswrapper[4824]: I0224 00:22:09.550828 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4vxp\" (UniqueName: \"kubernetes.io/projected/5daf2179-5386-4221-a15d-0e9787959357-kube-api-access-z4vxp\") pod \"community-operators-lffjr\" (UID: \"5daf2179-5386-4221-a15d-0e9787959357\") " pod="openshift-marketplace/community-operators-lffjr" Feb 24 00:22:09 crc kubenswrapper[4824]: I0224 00:22:09.714836 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lffjr" Feb 24 00:22:10 crc kubenswrapper[4824]: I0224 00:22:10.465945 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lffjr"] Feb 24 00:22:10 crc kubenswrapper[4824]: I0224 00:22:10.480844 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2g4m" event={"ID":"5e34f5eb-6bdc-406e-aaed-ff979c64f6db","Type":"ContainerStarted","Data":"831c0b295bbcc0eee583b3247a887cdd5a9f7abc2fafc098ba177ebb72c41c62"} Feb 24 00:22:11 crc kubenswrapper[4824]: I0224 00:22:11.488602 4824 generic.go:334] "Generic (PLEG): container finished" podID="5daf2179-5386-4221-a15d-0e9787959357" containerID="4323a297be2acca3f04764d788a2723c01de5977be24510091e6dae18368cf46" exitCode=0 Feb 24 00:22:11 crc kubenswrapper[4824]: I0224 00:22:11.488675 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lffjr" event={"ID":"5daf2179-5386-4221-a15d-0e9787959357","Type":"ContainerDied","Data":"4323a297be2acca3f04764d788a2723c01de5977be24510091e6dae18368cf46"} Feb 24 00:22:11 crc kubenswrapper[4824]: I0224 00:22:11.489256 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lffjr" event={"ID":"5daf2179-5386-4221-a15d-0e9787959357","Type":"ContainerStarted","Data":"fd0d534e6cc3fbc4eb96094ad2d6f52ebe74dd8f3e98453192847a75f0ffa856"} Feb 24 00:22:11 crc kubenswrapper[4824]: I0224 00:22:11.537879 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q2g4m" podStartSLOduration=3.719070388 podStartE2EDuration="15.537849884s" podCreationTimestamp="2026-02-24 00:21:56 +0000 UTC" firstStartedPulling="2026-02-24 00:21:58.383782432 +0000 UTC m=+982.373406901" lastFinishedPulling="2026-02-24 00:22:10.202561928 +0000 UTC m=+994.192186397" observedRunningTime="2026-02-24 00:22:11.536296975 +0000 
UTC m=+995.525921474" watchObservedRunningTime="2026-02-24 00:22:11.537849884 +0000 UTC m=+995.527474353" Feb 24 00:22:14 crc kubenswrapper[4824]: I0224 00:22:14.513014 4824 generic.go:334] "Generic (PLEG): container finished" podID="5daf2179-5386-4221-a15d-0e9787959357" containerID="e9f8f2ac56a9a9220e0f2fcf3de55cdda8261bdd3eb977d2fca357bf9c03b9e0" exitCode=0 Feb 24 00:22:14 crc kubenswrapper[4824]: I0224 00:22:14.513227 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lffjr" event={"ID":"5daf2179-5386-4221-a15d-0e9787959357","Type":"ContainerDied","Data":"e9f8f2ac56a9a9220e0f2fcf3de55cdda8261bdd3eb977d2fca357bf9c03b9e0"} Feb 24 00:22:15 crc kubenswrapper[4824]: I0224 00:22:15.523860 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lffjr" event={"ID":"5daf2179-5386-4221-a15d-0e9787959357","Type":"ContainerStarted","Data":"41227a05a63831cc63a8bba2e9c77df4d85286d5e66dfda245d5fa542a2ef56c"} Feb 24 00:22:15 crc kubenswrapper[4824]: I0224 00:22:15.543306 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lffjr" podStartSLOduration=3.038876405 podStartE2EDuration="6.543281105s" podCreationTimestamp="2026-02-24 00:22:09 +0000 UTC" firstStartedPulling="2026-02-24 00:22:11.490779743 +0000 UTC m=+995.480404212" lastFinishedPulling="2026-02-24 00:22:14.995184443 +0000 UTC m=+998.984808912" observedRunningTime="2026-02-24 00:22:15.542823973 +0000 UTC m=+999.532448462" watchObservedRunningTime="2026-02-24 00:22:15.543281105 +0000 UTC m=+999.532905564" Feb 24 00:22:16 crc kubenswrapper[4824]: I0224 00:22:16.941388 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q2g4m" Feb 24 00:22:16 crc kubenswrapper[4824]: I0224 00:22:16.941950 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-q2g4m" Feb 24 00:22:17 crc kubenswrapper[4824]: I0224 00:22:17.995626 4824 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q2g4m" podUID="5e34f5eb-6bdc-406e-aaed-ff979c64f6db" containerName="registry-server" probeResult="failure" output=< Feb 24 00:22:17 crc kubenswrapper[4824]: timeout: failed to connect service ":50051" within 1s Feb 24 00:22:17 crc kubenswrapper[4824]: > Feb 24 00:22:19 crc kubenswrapper[4824]: I0224 00:22:19.716189 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lffjr" Feb 24 00:22:19 crc kubenswrapper[4824]: I0224 00:22:19.717027 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lffjr" Feb 24 00:22:19 crc kubenswrapper[4824]: I0224 00:22:19.756545 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lffjr" Feb 24 00:22:20 crc kubenswrapper[4824]: I0224 00:22:20.595006 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lffjr" Feb 24 00:22:20 crc kubenswrapper[4824]: I0224 00:22:20.654125 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lffjr"] Feb 24 00:22:22 crc kubenswrapper[4824]: I0224 00:22:22.576252 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lffjr" podUID="5daf2179-5386-4221-a15d-0e9787959357" containerName="registry-server" containerID="cri-o://41227a05a63831cc63a8bba2e9c77df4d85286d5e66dfda245d5fa542a2ef56c" gracePeriod=2 Feb 24 00:22:22 crc kubenswrapper[4824]: I0224 00:22:22.727123 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lzf7f"] Feb 24 00:22:22 crc kubenswrapper[4824]: I0224 
00:22:22.735042 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lzf7f" Feb 24 00:22:22 crc kubenswrapper[4824]: I0224 00:22:22.746405 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lzf7f"] Feb 24 00:22:22 crc kubenswrapper[4824]: I0224 00:22:22.755565 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ad9454d-0615-4303-a59f-fbcc1d18c56a-catalog-content\") pod \"certified-operators-lzf7f\" (UID: \"8ad9454d-0615-4303-a59f-fbcc1d18c56a\") " pod="openshift-marketplace/certified-operators-lzf7f" Feb 24 00:22:22 crc kubenswrapper[4824]: I0224 00:22:22.755673 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ad9454d-0615-4303-a59f-fbcc1d18c56a-utilities\") pod \"certified-operators-lzf7f\" (UID: \"8ad9454d-0615-4303-a59f-fbcc1d18c56a\") " pod="openshift-marketplace/certified-operators-lzf7f" Feb 24 00:22:22 crc kubenswrapper[4824]: I0224 00:22:22.755788 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drcs5\" (UniqueName: \"kubernetes.io/projected/8ad9454d-0615-4303-a59f-fbcc1d18c56a-kube-api-access-drcs5\") pod \"certified-operators-lzf7f\" (UID: \"8ad9454d-0615-4303-a59f-fbcc1d18c56a\") " pod="openshift-marketplace/certified-operators-lzf7f" Feb 24 00:22:22 crc kubenswrapper[4824]: I0224 00:22:22.857637 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ad9454d-0615-4303-a59f-fbcc1d18c56a-utilities\") pod \"certified-operators-lzf7f\" (UID: \"8ad9454d-0615-4303-a59f-fbcc1d18c56a\") " pod="openshift-marketplace/certified-operators-lzf7f" Feb 24 00:22:22 crc kubenswrapper[4824]: I0224 
00:22:22.857745 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drcs5\" (UniqueName: \"kubernetes.io/projected/8ad9454d-0615-4303-a59f-fbcc1d18c56a-kube-api-access-drcs5\") pod \"certified-operators-lzf7f\" (UID: \"8ad9454d-0615-4303-a59f-fbcc1d18c56a\") " pod="openshift-marketplace/certified-operators-lzf7f" Feb 24 00:22:22 crc kubenswrapper[4824]: I0224 00:22:22.857781 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ad9454d-0615-4303-a59f-fbcc1d18c56a-catalog-content\") pod \"certified-operators-lzf7f\" (UID: \"8ad9454d-0615-4303-a59f-fbcc1d18c56a\") " pod="openshift-marketplace/certified-operators-lzf7f" Feb 24 00:22:22 crc kubenswrapper[4824]: I0224 00:22:22.858387 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ad9454d-0615-4303-a59f-fbcc1d18c56a-utilities\") pod \"certified-operators-lzf7f\" (UID: \"8ad9454d-0615-4303-a59f-fbcc1d18c56a\") " pod="openshift-marketplace/certified-operators-lzf7f" Feb 24 00:22:22 crc kubenswrapper[4824]: I0224 00:22:22.858434 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ad9454d-0615-4303-a59f-fbcc1d18c56a-catalog-content\") pod \"certified-operators-lzf7f\" (UID: \"8ad9454d-0615-4303-a59f-fbcc1d18c56a\") " pod="openshift-marketplace/certified-operators-lzf7f" Feb 24 00:22:22 crc kubenswrapper[4824]: I0224 00:22:22.884375 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drcs5\" (UniqueName: \"kubernetes.io/projected/8ad9454d-0615-4303-a59f-fbcc1d18c56a-kube-api-access-drcs5\") pod \"certified-operators-lzf7f\" (UID: \"8ad9454d-0615-4303-a59f-fbcc1d18c56a\") " pod="openshift-marketplace/certified-operators-lzf7f" Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.062890 4824 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lzf7f" Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.275566 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.276138 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.338270 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lffjr" Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.377610 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5daf2179-5386-4221-a15d-0e9787959357-utilities\") pod \"5daf2179-5386-4221-a15d-0e9787959357\" (UID: \"5daf2179-5386-4221-a15d-0e9787959357\") " Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.379741 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5daf2179-5386-4221-a15d-0e9787959357-utilities" (OuterVolumeSpecName: "utilities") pod "5daf2179-5386-4221-a15d-0e9787959357" (UID: "5daf2179-5386-4221-a15d-0e9787959357"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.380188 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5daf2179-5386-4221-a15d-0e9787959357-catalog-content\") pod \"5daf2179-5386-4221-a15d-0e9787959357\" (UID: \"5daf2179-5386-4221-a15d-0e9787959357\") " Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.380283 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4vxp\" (UniqueName: \"kubernetes.io/projected/5daf2179-5386-4221-a15d-0e9787959357-kube-api-access-z4vxp\") pod \"5daf2179-5386-4221-a15d-0e9787959357\" (UID: \"5daf2179-5386-4221-a15d-0e9787959357\") " Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.380724 4824 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5daf2179-5386-4221-a15d-0e9787959357-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.391410 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5daf2179-5386-4221-a15d-0e9787959357-kube-api-access-z4vxp" (OuterVolumeSpecName: "kube-api-access-z4vxp") pod "5daf2179-5386-4221-a15d-0e9787959357" (UID: "5daf2179-5386-4221-a15d-0e9787959357"). InnerVolumeSpecName "kube-api-access-z4vxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.460098 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5daf2179-5386-4221-a15d-0e9787959357-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5daf2179-5386-4221-a15d-0e9787959357" (UID: "5daf2179-5386-4221-a15d-0e9787959357"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.482499 4824 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5daf2179-5386-4221-a15d-0e9787959357-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.482585 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4vxp\" (UniqueName: \"kubernetes.io/projected/5daf2179-5386-4221-a15d-0e9787959357-kube-api-access-z4vxp\") on node \"crc\" DevicePath \"\"" Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.586647 4824 generic.go:334] "Generic (PLEG): container finished" podID="5daf2179-5386-4221-a15d-0e9787959357" containerID="41227a05a63831cc63a8bba2e9c77df4d85286d5e66dfda245d5fa542a2ef56c" exitCode=0 Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.586714 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lffjr" event={"ID":"5daf2179-5386-4221-a15d-0e9787959357","Type":"ContainerDied","Data":"41227a05a63831cc63a8bba2e9c77df4d85286d5e66dfda245d5fa542a2ef56c"} Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.586750 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lffjr" Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.586783 4824 scope.go:117] "RemoveContainer" containerID="41227a05a63831cc63a8bba2e9c77df4d85286d5e66dfda245d5fa542a2ef56c" Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.586766 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lffjr" event={"ID":"5daf2179-5386-4221-a15d-0e9787959357","Type":"ContainerDied","Data":"fd0d534e6cc3fbc4eb96094ad2d6f52ebe74dd8f3e98453192847a75f0ffa856"} Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.607508 4824 scope.go:117] "RemoveContainer" containerID="e9f8f2ac56a9a9220e0f2fcf3de55cdda8261bdd3eb977d2fca357bf9c03b9e0" Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.623013 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lffjr"] Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.632314 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lffjr"] Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.640906 4824 scope.go:117] "RemoveContainer" containerID="4323a297be2acca3f04764d788a2723c01de5977be24510091e6dae18368cf46" Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.642564 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lzf7f"] Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.661352 4824 scope.go:117] "RemoveContainer" containerID="41227a05a63831cc63a8bba2e9c77df4d85286d5e66dfda245d5fa542a2ef56c" Feb 24 00:22:23 crc kubenswrapper[4824]: E0224 00:22:23.661817 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41227a05a63831cc63a8bba2e9c77df4d85286d5e66dfda245d5fa542a2ef56c\": container with ID starting with 41227a05a63831cc63a8bba2e9c77df4d85286d5e66dfda245d5fa542a2ef56c not 
found: ID does not exist" containerID="41227a05a63831cc63a8bba2e9c77df4d85286d5e66dfda245d5fa542a2ef56c" Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.661887 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41227a05a63831cc63a8bba2e9c77df4d85286d5e66dfda245d5fa542a2ef56c"} err="failed to get container status \"41227a05a63831cc63a8bba2e9c77df4d85286d5e66dfda245d5fa542a2ef56c\": rpc error: code = NotFound desc = could not find container \"41227a05a63831cc63a8bba2e9c77df4d85286d5e66dfda245d5fa542a2ef56c\": container with ID starting with 41227a05a63831cc63a8bba2e9c77df4d85286d5e66dfda245d5fa542a2ef56c not found: ID does not exist" Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.661940 4824 scope.go:117] "RemoveContainer" containerID="e9f8f2ac56a9a9220e0f2fcf3de55cdda8261bdd3eb977d2fca357bf9c03b9e0" Feb 24 00:22:23 crc kubenswrapper[4824]: E0224 00:22:23.665802 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9f8f2ac56a9a9220e0f2fcf3de55cdda8261bdd3eb977d2fca357bf9c03b9e0\": container with ID starting with e9f8f2ac56a9a9220e0f2fcf3de55cdda8261bdd3eb977d2fca357bf9c03b9e0 not found: ID does not exist" containerID="e9f8f2ac56a9a9220e0f2fcf3de55cdda8261bdd3eb977d2fca357bf9c03b9e0" Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.665846 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9f8f2ac56a9a9220e0f2fcf3de55cdda8261bdd3eb977d2fca357bf9c03b9e0"} err="failed to get container status \"e9f8f2ac56a9a9220e0f2fcf3de55cdda8261bdd3eb977d2fca357bf9c03b9e0\": rpc error: code = NotFound desc = could not find container \"e9f8f2ac56a9a9220e0f2fcf3de55cdda8261bdd3eb977d2fca357bf9c03b9e0\": container with ID starting with e9f8f2ac56a9a9220e0f2fcf3de55cdda8261bdd3eb977d2fca357bf9c03b9e0 not found: ID does not exist" Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.665874 
4824 scope.go:117] "RemoveContainer" containerID="4323a297be2acca3f04764d788a2723c01de5977be24510091e6dae18368cf46" Feb 24 00:22:23 crc kubenswrapper[4824]: E0224 00:22:23.666093 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4323a297be2acca3f04764d788a2723c01de5977be24510091e6dae18368cf46\": container with ID starting with 4323a297be2acca3f04764d788a2723c01de5977be24510091e6dae18368cf46 not found: ID does not exist" containerID="4323a297be2acca3f04764d788a2723c01de5977be24510091e6dae18368cf46" Feb 24 00:22:23 crc kubenswrapper[4824]: I0224 00:22:23.666119 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4323a297be2acca3f04764d788a2723c01de5977be24510091e6dae18368cf46"} err="failed to get container status \"4323a297be2acca3f04764d788a2723c01de5977be24510091e6dae18368cf46\": rpc error: code = NotFound desc = could not find container \"4323a297be2acca3f04764d788a2723c01de5977be24510091e6dae18368cf46\": container with ID starting with 4323a297be2acca3f04764d788a2723c01de5977be24510091e6dae18368cf46 not found: ID does not exist" Feb 24 00:22:24 crc kubenswrapper[4824]: I0224 00:22:24.598732 4824 generic.go:334] "Generic (PLEG): container finished" podID="8ad9454d-0615-4303-a59f-fbcc1d18c56a" containerID="1092c6a7e447e77503d91c65c3c5f6b11d0fed499e29b15c1ce9420c3b4bd460" exitCode=0 Feb 24 00:22:24 crc kubenswrapper[4824]: I0224 00:22:24.598793 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzf7f" event={"ID":"8ad9454d-0615-4303-a59f-fbcc1d18c56a","Type":"ContainerDied","Data":"1092c6a7e447e77503d91c65c3c5f6b11d0fed499e29b15c1ce9420c3b4bd460"} Feb 24 00:22:24 crc kubenswrapper[4824]: I0224 00:22:24.598823 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzf7f" 
event={"ID":"8ad9454d-0615-4303-a59f-fbcc1d18c56a","Type":"ContainerStarted","Data":"f8e29f08a92667cb46b4ad4c4f5b0ccad1357b1ed5c00229f2037079821e8657"} Feb 24 00:22:24 crc kubenswrapper[4824]: I0224 00:22:24.703684 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5daf2179-5386-4221-a15d-0e9787959357" path="/var/lib/kubelet/pods/5daf2179-5386-4221-a15d-0e9787959357/volumes" Feb 24 00:22:25 crc kubenswrapper[4824]: I0224 00:22:25.611925 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzf7f" event={"ID":"8ad9454d-0615-4303-a59f-fbcc1d18c56a","Type":"ContainerStarted","Data":"8ff769ada3b3ef2e8697e389c71973fa6f1d0e76779e9a02a3afdec95938f929"} Feb 24 00:22:26 crc kubenswrapper[4824]: I0224 00:22:26.622006 4824 generic.go:334] "Generic (PLEG): container finished" podID="8ad9454d-0615-4303-a59f-fbcc1d18c56a" containerID="8ff769ada3b3ef2e8697e389c71973fa6f1d0e76779e9a02a3afdec95938f929" exitCode=0 Feb 24 00:22:26 crc kubenswrapper[4824]: I0224 00:22:26.622046 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzf7f" event={"ID":"8ad9454d-0615-4303-a59f-fbcc1d18c56a","Type":"ContainerDied","Data":"8ff769ada3b3ef2e8697e389c71973fa6f1d0e76779e9a02a3afdec95938f929"} Feb 24 00:22:27 crc kubenswrapper[4824]: I0224 00:22:27.047058 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q2g4m" Feb 24 00:22:27 crc kubenswrapper[4824]: I0224 00:22:27.102207 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q2g4m" Feb 24 00:22:27 crc kubenswrapper[4824]: I0224 00:22:27.631072 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzf7f" 
event={"ID":"8ad9454d-0615-4303-a59f-fbcc1d18c56a","Type":"ContainerStarted","Data":"1c3d6d8ef243ea2dde76009ae12999ae54312654d0dda131ae174f8dbc2faffa"} Feb 24 00:22:27 crc kubenswrapper[4824]: I0224 00:22:27.650819 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lzf7f" podStartSLOduration=3.210130608 podStartE2EDuration="5.650799192s" podCreationTimestamp="2026-02-24 00:22:22 +0000 UTC" firstStartedPulling="2026-02-24 00:22:24.600730401 +0000 UTC m=+1008.590354870" lastFinishedPulling="2026-02-24 00:22:27.041398985 +0000 UTC m=+1011.031023454" observedRunningTime="2026-02-24 00:22:27.649740655 +0000 UTC m=+1011.639365144" watchObservedRunningTime="2026-02-24 00:22:27.650799192 +0000 UTC m=+1011.640423681" Feb 24 00:22:29 crc kubenswrapper[4824]: I0224 00:22:29.595735 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q2g4m"] Feb 24 00:22:29 crc kubenswrapper[4824]: I0224 00:22:29.596486 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q2g4m" podUID="5e34f5eb-6bdc-406e-aaed-ff979c64f6db" containerName="registry-server" containerID="cri-o://831c0b295bbcc0eee583b3247a887cdd5a9f7abc2fafc098ba177ebb72c41c62" gracePeriod=2 Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.496048 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2g4m" Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.596893 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e34f5eb-6bdc-406e-aaed-ff979c64f6db-utilities\") pod \"5e34f5eb-6bdc-406e-aaed-ff979c64f6db\" (UID: \"5e34f5eb-6bdc-406e-aaed-ff979c64f6db\") " Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.597041 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq7jv\" (UniqueName: \"kubernetes.io/projected/5e34f5eb-6bdc-406e-aaed-ff979c64f6db-kube-api-access-lq7jv\") pod \"5e34f5eb-6bdc-406e-aaed-ff979c64f6db\" (UID: \"5e34f5eb-6bdc-406e-aaed-ff979c64f6db\") " Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.597083 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e34f5eb-6bdc-406e-aaed-ff979c64f6db-catalog-content\") pod \"5e34f5eb-6bdc-406e-aaed-ff979c64f6db\" (UID: \"5e34f5eb-6bdc-406e-aaed-ff979c64f6db\") " Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.597801 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e34f5eb-6bdc-406e-aaed-ff979c64f6db-utilities" (OuterVolumeSpecName: "utilities") pod "5e34f5eb-6bdc-406e-aaed-ff979c64f6db" (UID: "5e34f5eb-6bdc-406e-aaed-ff979c64f6db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.604012 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e34f5eb-6bdc-406e-aaed-ff979c64f6db-kube-api-access-lq7jv" (OuterVolumeSpecName: "kube-api-access-lq7jv") pod "5e34f5eb-6bdc-406e-aaed-ff979c64f6db" (UID: "5e34f5eb-6bdc-406e-aaed-ff979c64f6db"). InnerVolumeSpecName "kube-api-access-lq7jv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.650058 4824 generic.go:334] "Generic (PLEG): container finished" podID="5e34f5eb-6bdc-406e-aaed-ff979c64f6db" containerID="831c0b295bbcc0eee583b3247a887cdd5a9f7abc2fafc098ba177ebb72c41c62" exitCode=0 Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.650111 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2g4m" event={"ID":"5e34f5eb-6bdc-406e-aaed-ff979c64f6db","Type":"ContainerDied","Data":"831c0b295bbcc0eee583b3247a887cdd5a9f7abc2fafc098ba177ebb72c41c62"} Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.650144 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2g4m" event={"ID":"5e34f5eb-6bdc-406e-aaed-ff979c64f6db","Type":"ContainerDied","Data":"c154e571981648f7115b3e1d513a6874710dbdb0e4c9bd37e0e746f9c26709ae"} Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.650165 4824 scope.go:117] "RemoveContainer" containerID="831c0b295bbcc0eee583b3247a887cdd5a9f7abc2fafc098ba177ebb72c41c62" Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.650291 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2g4m" Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.668198 4824 scope.go:117] "RemoveContainer" containerID="5bd458f8ab6e18655e507e975fc2e2718159f8ffd4e3a3ea392e57cab31af1b7" Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.698133 4824 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e34f5eb-6bdc-406e-aaed-ff979c64f6db-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.698165 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq7jv\" (UniqueName: \"kubernetes.io/projected/5e34f5eb-6bdc-406e-aaed-ff979c64f6db-kube-api-access-lq7jv\") on node \"crc\" DevicePath \"\"" Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.705947 4824 scope.go:117] "RemoveContainer" containerID="465fc0bdabcaaeddc87638959d003ee0489fecdf8b29583f92b701894d90d1f2" Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.723180 4824 scope.go:117] "RemoveContainer" containerID="831c0b295bbcc0eee583b3247a887cdd5a9f7abc2fafc098ba177ebb72c41c62" Feb 24 00:22:30 crc kubenswrapper[4824]: E0224 00:22:30.723744 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"831c0b295bbcc0eee583b3247a887cdd5a9f7abc2fafc098ba177ebb72c41c62\": container with ID starting with 831c0b295bbcc0eee583b3247a887cdd5a9f7abc2fafc098ba177ebb72c41c62 not found: ID does not exist" containerID="831c0b295bbcc0eee583b3247a887cdd5a9f7abc2fafc098ba177ebb72c41c62" Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.723793 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"831c0b295bbcc0eee583b3247a887cdd5a9f7abc2fafc098ba177ebb72c41c62"} err="failed to get container status \"831c0b295bbcc0eee583b3247a887cdd5a9f7abc2fafc098ba177ebb72c41c62\": rpc error: code = NotFound desc = could not 
find container \"831c0b295bbcc0eee583b3247a887cdd5a9f7abc2fafc098ba177ebb72c41c62\": container with ID starting with 831c0b295bbcc0eee583b3247a887cdd5a9f7abc2fafc098ba177ebb72c41c62 not found: ID does not exist" Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.723819 4824 scope.go:117] "RemoveContainer" containerID="5bd458f8ab6e18655e507e975fc2e2718159f8ffd4e3a3ea392e57cab31af1b7" Feb 24 00:22:30 crc kubenswrapper[4824]: E0224 00:22:30.724156 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bd458f8ab6e18655e507e975fc2e2718159f8ffd4e3a3ea392e57cab31af1b7\": container with ID starting with 5bd458f8ab6e18655e507e975fc2e2718159f8ffd4e3a3ea392e57cab31af1b7 not found: ID does not exist" containerID="5bd458f8ab6e18655e507e975fc2e2718159f8ffd4e3a3ea392e57cab31af1b7" Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.724239 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bd458f8ab6e18655e507e975fc2e2718159f8ffd4e3a3ea392e57cab31af1b7"} err="failed to get container status \"5bd458f8ab6e18655e507e975fc2e2718159f8ffd4e3a3ea392e57cab31af1b7\": rpc error: code = NotFound desc = could not find container \"5bd458f8ab6e18655e507e975fc2e2718159f8ffd4e3a3ea392e57cab31af1b7\": container with ID starting with 5bd458f8ab6e18655e507e975fc2e2718159f8ffd4e3a3ea392e57cab31af1b7 not found: ID does not exist" Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.724280 4824 scope.go:117] "RemoveContainer" containerID="465fc0bdabcaaeddc87638959d003ee0489fecdf8b29583f92b701894d90d1f2" Feb 24 00:22:30 crc kubenswrapper[4824]: E0224 00:22:30.724995 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"465fc0bdabcaaeddc87638959d003ee0489fecdf8b29583f92b701894d90d1f2\": container with ID starting with 465fc0bdabcaaeddc87638959d003ee0489fecdf8b29583f92b701894d90d1f2 not found: ID 
does not exist" containerID="465fc0bdabcaaeddc87638959d003ee0489fecdf8b29583f92b701894d90d1f2" Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.725062 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"465fc0bdabcaaeddc87638959d003ee0489fecdf8b29583f92b701894d90d1f2"} err="failed to get container status \"465fc0bdabcaaeddc87638959d003ee0489fecdf8b29583f92b701894d90d1f2\": rpc error: code = NotFound desc = could not find container \"465fc0bdabcaaeddc87638959d003ee0489fecdf8b29583f92b701894d90d1f2\": container with ID starting with 465fc0bdabcaaeddc87638959d003ee0489fecdf8b29583f92b701894d90d1f2 not found: ID does not exist" Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.726281 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e34f5eb-6bdc-406e-aaed-ff979c64f6db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e34f5eb-6bdc-406e-aaed-ff979c64f6db" (UID: "5e34f5eb-6bdc-406e-aaed-ff979c64f6db"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.799949 4824 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e34f5eb-6bdc-406e-aaed-ff979c64f6db-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.983085 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q2g4m"] Feb 24 00:22:30 crc kubenswrapper[4824]: I0224 00:22:30.992389 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q2g4m"] Feb 24 00:22:32 crc kubenswrapper[4824]: I0224 00:22:32.707901 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e34f5eb-6bdc-406e-aaed-ff979c64f6db" path="/var/lib/kubelet/pods/5e34f5eb-6bdc-406e-aaed-ff979c64f6db/volumes" Feb 24 00:22:33 crc kubenswrapper[4824]: I0224 00:22:33.064332 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lzf7f" Feb 24 00:22:33 crc kubenswrapper[4824]: I0224 00:22:33.065282 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lzf7f" Feb 24 00:22:33 crc kubenswrapper[4824]: I0224 00:22:33.138598 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lzf7f" Feb 24 00:22:33 crc kubenswrapper[4824]: I0224 00:22:33.734560 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lzf7f" Feb 24 00:22:34 crc kubenswrapper[4824]: I0224 00:22:34.790730 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lzf7f"] Feb 24 00:22:36 crc kubenswrapper[4824]: I0224 00:22:36.701452 4824 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-lzf7f" podUID="8ad9454d-0615-4303-a59f-fbcc1d18c56a" containerName="registry-server" containerID="cri-o://1c3d6d8ef243ea2dde76009ae12999ae54312654d0dda131ae174f8dbc2faffa" gracePeriod=2 Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.102559 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lzf7f" Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.198741 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ad9454d-0615-4303-a59f-fbcc1d18c56a-utilities\") pod \"8ad9454d-0615-4303-a59f-fbcc1d18c56a\" (UID: \"8ad9454d-0615-4303-a59f-fbcc1d18c56a\") " Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.198838 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ad9454d-0615-4303-a59f-fbcc1d18c56a-catalog-content\") pod \"8ad9454d-0615-4303-a59f-fbcc1d18c56a\" (UID: \"8ad9454d-0615-4303-a59f-fbcc1d18c56a\") " Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.198874 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drcs5\" (UniqueName: \"kubernetes.io/projected/8ad9454d-0615-4303-a59f-fbcc1d18c56a-kube-api-access-drcs5\") pod \"8ad9454d-0615-4303-a59f-fbcc1d18c56a\" (UID: \"8ad9454d-0615-4303-a59f-fbcc1d18c56a\") " Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.200077 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ad9454d-0615-4303-a59f-fbcc1d18c56a-utilities" (OuterVolumeSpecName: "utilities") pod "8ad9454d-0615-4303-a59f-fbcc1d18c56a" (UID: "8ad9454d-0615-4303-a59f-fbcc1d18c56a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.209501 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ad9454d-0615-4303-a59f-fbcc1d18c56a-kube-api-access-drcs5" (OuterVolumeSpecName: "kube-api-access-drcs5") pod "8ad9454d-0615-4303-a59f-fbcc1d18c56a" (UID: "8ad9454d-0615-4303-a59f-fbcc1d18c56a"). InnerVolumeSpecName "kube-api-access-drcs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.253058 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ad9454d-0615-4303-a59f-fbcc1d18c56a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ad9454d-0615-4303-a59f-fbcc1d18c56a" (UID: "8ad9454d-0615-4303-a59f-fbcc1d18c56a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.300851 4824 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ad9454d-0615-4303-a59f-fbcc1d18c56a-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.300888 4824 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ad9454d-0615-4303-a59f-fbcc1d18c56a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.300901 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drcs5\" (UniqueName: \"kubernetes.io/projected/8ad9454d-0615-4303-a59f-fbcc1d18c56a-kube-api-access-drcs5\") on node \"crc\" DevicePath \"\"" Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.728724 4824 generic.go:334] "Generic (PLEG): container finished" podID="8ad9454d-0615-4303-a59f-fbcc1d18c56a" 
containerID="1c3d6d8ef243ea2dde76009ae12999ae54312654d0dda131ae174f8dbc2faffa" exitCode=0 Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.728813 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzf7f" event={"ID":"8ad9454d-0615-4303-a59f-fbcc1d18c56a","Type":"ContainerDied","Data":"1c3d6d8ef243ea2dde76009ae12999ae54312654d0dda131ae174f8dbc2faffa"} Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.728867 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lzf7f" Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.728906 4824 scope.go:117] "RemoveContainer" containerID="1c3d6d8ef243ea2dde76009ae12999ae54312654d0dda131ae174f8dbc2faffa" Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.728882 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzf7f" event={"ID":"8ad9454d-0615-4303-a59f-fbcc1d18c56a","Type":"ContainerDied","Data":"f8e29f08a92667cb46b4ad4c4f5b0ccad1357b1ed5c00229f2037079821e8657"} Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.765828 4824 scope.go:117] "RemoveContainer" containerID="8ff769ada3b3ef2e8697e389c71973fa6f1d0e76779e9a02a3afdec95938f929" Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.805872 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lzf7f"] Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.808901 4824 scope.go:117] "RemoveContainer" containerID="1092c6a7e447e77503d91c65c3c5f6b11d0fed499e29b15c1ce9420c3b4bd460" Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.812275 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lzf7f"] Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.829011 4824 scope.go:117] "RemoveContainer" containerID="1c3d6d8ef243ea2dde76009ae12999ae54312654d0dda131ae174f8dbc2faffa" Feb 24 
00:22:37 crc kubenswrapper[4824]: E0224 00:22:37.831122 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c3d6d8ef243ea2dde76009ae12999ae54312654d0dda131ae174f8dbc2faffa\": container with ID starting with 1c3d6d8ef243ea2dde76009ae12999ae54312654d0dda131ae174f8dbc2faffa not found: ID does not exist" containerID="1c3d6d8ef243ea2dde76009ae12999ae54312654d0dda131ae174f8dbc2faffa" Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.831172 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c3d6d8ef243ea2dde76009ae12999ae54312654d0dda131ae174f8dbc2faffa"} err="failed to get container status \"1c3d6d8ef243ea2dde76009ae12999ae54312654d0dda131ae174f8dbc2faffa\": rpc error: code = NotFound desc = could not find container \"1c3d6d8ef243ea2dde76009ae12999ae54312654d0dda131ae174f8dbc2faffa\": container with ID starting with 1c3d6d8ef243ea2dde76009ae12999ae54312654d0dda131ae174f8dbc2faffa not found: ID does not exist" Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.831203 4824 scope.go:117] "RemoveContainer" containerID="8ff769ada3b3ef2e8697e389c71973fa6f1d0e76779e9a02a3afdec95938f929" Feb 24 00:22:37 crc kubenswrapper[4824]: E0224 00:22:37.833813 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ff769ada3b3ef2e8697e389c71973fa6f1d0e76779e9a02a3afdec95938f929\": container with ID starting with 8ff769ada3b3ef2e8697e389c71973fa6f1d0e76779e9a02a3afdec95938f929 not found: ID does not exist" containerID="8ff769ada3b3ef2e8697e389c71973fa6f1d0e76779e9a02a3afdec95938f929" Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.833840 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ff769ada3b3ef2e8697e389c71973fa6f1d0e76779e9a02a3afdec95938f929"} err="failed to get container status 
\"8ff769ada3b3ef2e8697e389c71973fa6f1d0e76779e9a02a3afdec95938f929\": rpc error: code = NotFound desc = could not find container \"8ff769ada3b3ef2e8697e389c71973fa6f1d0e76779e9a02a3afdec95938f929\": container with ID starting with 8ff769ada3b3ef2e8697e389c71973fa6f1d0e76779e9a02a3afdec95938f929 not found: ID does not exist" Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.833856 4824 scope.go:117] "RemoveContainer" containerID="1092c6a7e447e77503d91c65c3c5f6b11d0fed499e29b15c1ce9420c3b4bd460" Feb 24 00:22:37 crc kubenswrapper[4824]: E0224 00:22:37.834126 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1092c6a7e447e77503d91c65c3c5f6b11d0fed499e29b15c1ce9420c3b4bd460\": container with ID starting with 1092c6a7e447e77503d91c65c3c5f6b11d0fed499e29b15c1ce9420c3b4bd460 not found: ID does not exist" containerID="1092c6a7e447e77503d91c65c3c5f6b11d0fed499e29b15c1ce9420c3b4bd460" Feb 24 00:22:37 crc kubenswrapper[4824]: I0224 00:22:37.834176 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1092c6a7e447e77503d91c65c3c5f6b11d0fed499e29b15c1ce9420c3b4bd460"} err="failed to get container status \"1092c6a7e447e77503d91c65c3c5f6b11d0fed499e29b15c1ce9420c3b4bd460\": rpc error: code = NotFound desc = could not find container \"1092c6a7e447e77503d91c65c3c5f6b11d0fed499e29b15c1ce9420c3b4bd460\": container with ID starting with 1092c6a7e447e77503d91c65c3c5f6b11d0fed499e29b15c1ce9420c3b4bd460 not found: ID does not exist" Feb 24 00:22:38 crc kubenswrapper[4824]: I0224 00:22:38.706789 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ad9454d-0615-4303-a59f-fbcc1d18c56a" path="/var/lib/kubelet/pods/8ad9454d-0615-4303-a59f-fbcc1d18c56a/volumes" Feb 24 00:22:53 crc kubenswrapper[4824]: I0224 00:22:53.276486 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:22:53 crc kubenswrapper[4824]: I0224 00:22:53.277386 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:22:53 crc kubenswrapper[4824]: I0224 00:22:53.277448 4824 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" Feb 24 00:22:53 crc kubenswrapper[4824]: I0224 00:22:53.278264 4824 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"960af4768f01705e18d424766fe08bb2ebb2088d821d2cb697f31ab6e24ccd28"} pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 00:22:53 crc kubenswrapper[4824]: I0224 00:22:53.278330 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" containerID="cri-o://960af4768f01705e18d424766fe08bb2ebb2088d821d2cb697f31ab6e24ccd28" gracePeriod=600 Feb 24 00:22:53 crc kubenswrapper[4824]: I0224 00:22:53.845979 4824 generic.go:334] "Generic (PLEG): container finished" podID="939ca085-9383-42e6-b7d6-37f101137273" containerID="960af4768f01705e18d424766fe08bb2ebb2088d821d2cb697f31ab6e24ccd28" exitCode=0 Feb 24 00:22:53 crc kubenswrapper[4824]: I0224 00:22:53.846070 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" event={"ID":"939ca085-9383-42e6-b7d6-37f101137273","Type":"ContainerDied","Data":"960af4768f01705e18d424766fe08bb2ebb2088d821d2cb697f31ab6e24ccd28"} Feb 24 00:22:53 crc kubenswrapper[4824]: I0224 00:22:53.846441 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" event={"ID":"939ca085-9383-42e6-b7d6-37f101137273","Type":"ContainerStarted","Data":"32f31702ad77be87a49a0e3d023914422f1fbe192b728a29ed31dacaa99cc4eb"} Feb 24 00:22:53 crc kubenswrapper[4824]: I0224 00:22:53.846465 4824 scope.go:117] "RemoveContainer" containerID="43fc5998f7ab77a1ca73519cb6a4280f5869d3a50153e1dc6202d26bc4d9b6a3" Feb 24 00:23:11 crc kubenswrapper[4824]: I0224 00:23:11.987110 4824 generic.go:334] "Generic (PLEG): container finished" podID="87c4f157-f66c-485a-b29d-db482b59c2a1" containerID="9aa60fad0c5e5b125c85a695ee9699aa7edfa41691acaf3323a949446eac0c81" exitCode=0 Feb 24 00:23:11 crc kubenswrapper[4824]: I0224 00:23:11.987175 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"87c4f157-f66c-485a-b29d-db482b59c2a1","Type":"ContainerDied","Data":"9aa60fad0c5e5b125c85a695ee9699aa7edfa41691acaf3323a949446eac0c81"} Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.300712 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.346232 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/87c4f157-f66c-485a-b29d-db482b59c2a1-builder-dockercfg-gqxxs-push\") pod \"87c4f157-f66c-485a-b29d-db482b59c2a1\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.346309 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87c4f157-f66c-485a-b29d-db482b59c2a1-build-proxy-ca-bundles\") pod \"87c4f157-f66c-485a-b29d-db482b59c2a1\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.346355 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87c4f157-f66c-485a-b29d-db482b59c2a1-build-ca-bundles\") pod \"87c4f157-f66c-485a-b29d-db482b59c2a1\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.346411 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-buildworkdir\") pod \"87c4f157-f66c-485a-b29d-db482b59c2a1\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.346485 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-build-blob-cache\") pod \"87c4f157-f66c-485a-b29d-db482b59c2a1\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.346501 4824 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/87c4f157-f66c-485a-b29d-db482b59c2a1-buildcachedir\") pod \"87c4f157-f66c-485a-b29d-db482b59c2a1\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.346541 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/87c4f157-f66c-485a-b29d-db482b59c2a1-builder-dockercfg-gqxxs-pull\") pod \"87c4f157-f66c-485a-b29d-db482b59c2a1\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.346582 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/87c4f157-f66c-485a-b29d-db482b59c2a1-build-system-configs\") pod \"87c4f157-f66c-485a-b29d-db482b59c2a1\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.346605 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-container-storage-run\") pod \"87c4f157-f66c-485a-b29d-db482b59c2a1\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.346645 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49jr7\" (UniqueName: \"kubernetes.io/projected/87c4f157-f66c-485a-b29d-db482b59c2a1-kube-api-access-49jr7\") pod \"87c4f157-f66c-485a-b29d-db482b59c2a1\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.346702 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/87c4f157-f66c-485a-b29d-db482b59c2a1-node-pullsecrets\") pod \"87c4f157-f66c-485a-b29d-db482b59c2a1\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.346729 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-container-storage-root\") pod \"87c4f157-f66c-485a-b29d-db482b59c2a1\" (UID: \"87c4f157-f66c-485a-b29d-db482b59c2a1\") " Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.346809 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87c4f157-f66c-485a-b29d-db482b59c2a1-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "87c4f157-f66c-485a-b29d-db482b59c2a1" (UID: "87c4f157-f66c-485a-b29d-db482b59c2a1"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.347056 4824 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/87c4f157-f66c-485a-b29d-db482b59c2a1-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.347511 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87c4f157-f66c-485a-b29d-db482b59c2a1-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "87c4f157-f66c-485a-b29d-db482b59c2a1" (UID: "87c4f157-f66c-485a-b29d-db482b59c2a1"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.347545 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87c4f157-f66c-485a-b29d-db482b59c2a1-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "87c4f157-f66c-485a-b29d-db482b59c2a1" (UID: "87c4f157-f66c-485a-b29d-db482b59c2a1"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.348338 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87c4f157-f66c-485a-b29d-db482b59c2a1-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "87c4f157-f66c-485a-b29d-db482b59c2a1" (UID: "87c4f157-f66c-485a-b29d-db482b59c2a1"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.348941 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "87c4f157-f66c-485a-b29d-db482b59c2a1" (UID: "87c4f157-f66c-485a-b29d-db482b59c2a1"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.352437 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87c4f157-f66c-485a-b29d-db482b59c2a1-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "87c4f157-f66c-485a-b29d-db482b59c2a1" (UID: "87c4f157-f66c-485a-b29d-db482b59c2a1"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.357849 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "87c4f157-f66c-485a-b29d-db482b59c2a1" (UID: "87c4f157-f66c-485a-b29d-db482b59c2a1"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.365625 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87c4f157-f66c-485a-b29d-db482b59c2a1-kube-api-access-49jr7" (OuterVolumeSpecName: "kube-api-access-49jr7") pod "87c4f157-f66c-485a-b29d-db482b59c2a1" (UID: "87c4f157-f66c-485a-b29d-db482b59c2a1"). InnerVolumeSpecName "kube-api-access-49jr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.369975 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c4f157-f66c-485a-b29d-db482b59c2a1-builder-dockercfg-gqxxs-pull" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-pull") pod "87c4f157-f66c-485a-b29d-db482b59c2a1" (UID: "87c4f157-f66c-485a-b29d-db482b59c2a1"). InnerVolumeSpecName "builder-dockercfg-gqxxs-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.375491 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87c4f157-f66c-485a-b29d-db482b59c2a1-builder-dockercfg-gqxxs-push" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-push") pod "87c4f157-f66c-485a-b29d-db482b59c2a1" (UID: "87c4f157-f66c-485a-b29d-db482b59c2a1"). InnerVolumeSpecName "builder-dockercfg-gqxxs-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.447593 4824 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/87c4f157-f66c-485a-b29d-db482b59c2a1-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.447945 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.448095 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49jr7\" (UniqueName: \"kubernetes.io/projected/87c4f157-f66c-485a-b29d-db482b59c2a1-kube-api-access-49jr7\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.448200 4824 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/87c4f157-f66c-485a-b29d-db482b59c2a1-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.448303 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/87c4f157-f66c-485a-b29d-db482b59c2a1-builder-dockercfg-gqxxs-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.448408 4824 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87c4f157-f66c-485a-b29d-db482b59c2a1-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.448507 4824 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/87c4f157-f66c-485a-b29d-db482b59c2a1-build-ca-bundles\") on node 
\"crc\" DevicePath \"\"" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.448611 4824 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.448682 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/87c4f157-f66c-485a-b29d-db482b59c2a1-builder-dockercfg-gqxxs-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.556541 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "87c4f157-f66c-485a-b29d-db482b59c2a1" (UID: "87c4f157-f66c-485a-b29d-db482b59c2a1"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:23:13 crc kubenswrapper[4824]: I0224 00:23:13.651745 4824 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:14 crc kubenswrapper[4824]: I0224 00:23:14.007079 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"87c4f157-f66c-485a-b29d-db482b59c2a1","Type":"ContainerDied","Data":"360ddab163ca47c1a0fb0338c2f478a6b9ea9e3e2cf8ffd6966eab886336bcb9"} Feb 24 00:23:14 crc kubenswrapper[4824]: I0224 00:23:14.007509 4824 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="360ddab163ca47c1a0fb0338c2f478a6b9ea9e3e2cf8ffd6966eab886336bcb9" Feb 24 00:23:14 crc kubenswrapper[4824]: I0224 00:23:14.007369 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Feb 24 00:23:15 crc kubenswrapper[4824]: I0224 00:23:15.137110 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "87c4f157-f66c-485a-b29d-db482b59c2a1" (UID: "87c4f157-f66c-485a-b29d-db482b59c2a1"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:23:15 crc kubenswrapper[4824]: I0224 00:23:15.185227 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/87c4f157-f66c-485a-b29d-db482b59c2a1-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:17 crc kubenswrapper[4824]: I0224 00:23:17.933233 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 24 00:23:17 crc kubenswrapper[4824]: E0224 00:23:17.934252 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e34f5eb-6bdc-406e-aaed-ff979c64f6db" containerName="extract-utilities" Feb 24 00:23:17 crc kubenswrapper[4824]: I0224 00:23:17.934274 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e34f5eb-6bdc-406e-aaed-ff979c64f6db" containerName="extract-utilities" Feb 24 00:23:17 crc kubenswrapper[4824]: E0224 00:23:17.934294 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c4f157-f66c-485a-b29d-db482b59c2a1" containerName="git-clone" Feb 24 00:23:17 crc kubenswrapper[4824]: I0224 00:23:17.934302 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c4f157-f66c-485a-b29d-db482b59c2a1" containerName="git-clone" Feb 24 00:23:17 crc kubenswrapper[4824]: E0224 00:23:17.934320 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5daf2179-5386-4221-a15d-0e9787959357" containerName="extract-utilities" Feb 24 00:23:17 crc 
kubenswrapper[4824]: I0224 00:23:17.934329 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="5daf2179-5386-4221-a15d-0e9787959357" containerName="extract-utilities" Feb 24 00:23:17 crc kubenswrapper[4824]: E0224 00:23:17.934345 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c4f157-f66c-485a-b29d-db482b59c2a1" containerName="manage-dockerfile" Feb 24 00:23:17 crc kubenswrapper[4824]: I0224 00:23:17.934352 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c4f157-f66c-485a-b29d-db482b59c2a1" containerName="manage-dockerfile" Feb 24 00:23:17 crc kubenswrapper[4824]: E0224 00:23:17.934361 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5daf2179-5386-4221-a15d-0e9787959357" containerName="registry-server" Feb 24 00:23:17 crc kubenswrapper[4824]: I0224 00:23:17.934370 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="5daf2179-5386-4221-a15d-0e9787959357" containerName="registry-server" Feb 24 00:23:17 crc kubenswrapper[4824]: E0224 00:23:17.934378 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ad9454d-0615-4303-a59f-fbcc1d18c56a" containerName="extract-utilities" Feb 24 00:23:17 crc kubenswrapper[4824]: I0224 00:23:17.934385 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ad9454d-0615-4303-a59f-fbcc1d18c56a" containerName="extract-utilities" Feb 24 00:23:17 crc kubenswrapper[4824]: E0224 00:23:17.934395 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87c4f157-f66c-485a-b29d-db482b59c2a1" containerName="docker-build" Feb 24 00:23:17 crc kubenswrapper[4824]: I0224 00:23:17.934401 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="87c4f157-f66c-485a-b29d-db482b59c2a1" containerName="docker-build" Feb 24 00:23:17 crc kubenswrapper[4824]: E0224 00:23:17.934470 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e34f5eb-6bdc-406e-aaed-ff979c64f6db" containerName="registry-server" Feb 24 00:23:17 crc 
kubenswrapper[4824]: I0224 00:23:17.934479 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e34f5eb-6bdc-406e-aaed-ff979c64f6db" containerName="registry-server" Feb 24 00:23:17 crc kubenswrapper[4824]: E0224 00:23:17.934490 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ad9454d-0615-4303-a59f-fbcc1d18c56a" containerName="extract-content" Feb 24 00:23:17 crc kubenswrapper[4824]: I0224 00:23:17.934498 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ad9454d-0615-4303-a59f-fbcc1d18c56a" containerName="extract-content" Feb 24 00:23:17 crc kubenswrapper[4824]: E0224 00:23:17.934509 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ad9454d-0615-4303-a59f-fbcc1d18c56a" containerName="registry-server" Feb 24 00:23:17 crc kubenswrapper[4824]: I0224 00:23:17.934536 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ad9454d-0615-4303-a59f-fbcc1d18c56a" containerName="registry-server" Feb 24 00:23:17 crc kubenswrapper[4824]: E0224 00:23:17.934555 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5daf2179-5386-4221-a15d-0e9787959357" containerName="extract-content" Feb 24 00:23:17 crc kubenswrapper[4824]: I0224 00:23:17.934563 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="5daf2179-5386-4221-a15d-0e9787959357" containerName="extract-content" Feb 24 00:23:17 crc kubenswrapper[4824]: E0224 00:23:17.934579 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e34f5eb-6bdc-406e-aaed-ff979c64f6db" containerName="extract-content" Feb 24 00:23:17 crc kubenswrapper[4824]: I0224 00:23:17.934587 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e34f5eb-6bdc-406e-aaed-ff979c64f6db" containerName="extract-content" Feb 24 00:23:17 crc kubenswrapper[4824]: I0224 00:23:17.934753 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ad9454d-0615-4303-a59f-fbcc1d18c56a" containerName="registry-server" Feb 24 00:23:17 crc 
kubenswrapper[4824]: I0224 00:23:17.934765 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="5daf2179-5386-4221-a15d-0e9787959357" containerName="registry-server" Feb 24 00:23:17 crc kubenswrapper[4824]: I0224 00:23:17.934781 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e34f5eb-6bdc-406e-aaed-ff979c64f6db" containerName="registry-server" Feb 24 00:23:17 crc kubenswrapper[4824]: I0224 00:23:17.934791 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="87c4f157-f66c-485a-b29d-db482b59c2a1" containerName="docker-build" Feb 24 00:23:17 crc kubenswrapper[4824]: I0224 00:23:17.935558 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Feb 24 00:23:17 crc kubenswrapper[4824]: I0224 00:23:17.943238 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-sys-config" Feb 24 00:23:17 crc kubenswrapper[4824]: I0224 00:23:17.943491 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-global-ca" Feb 24 00:23:17 crc kubenswrapper[4824]: I0224 00:23:17.943722 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-gqxxs" Feb 24 00:23:17 crc kubenswrapper[4824]: I0224 00:23:17.943875 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-ca" Feb 24 00:23:17 crc kubenswrapper[4824]: I0224 00:23:17.963949 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.130966 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-system-configs\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " 
pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.131040 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.131063 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-container-storage-root\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.132203 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-builder-dockercfg-gqxxs-pull\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.132285 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsjxw\" (UniqueName: \"kubernetes.io/projected/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-kube-api-access-zsjxw\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.132360 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-buildcachedir\") pod \"sg-core-1-build\" (UID: 
\"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.132395 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.132464 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.132613 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.132714 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-container-storage-run\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.132758 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-builder-dockercfg-gqxxs-push\") pod \"sg-core-1-build\" 
(UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.132990 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-buildworkdir\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.234999 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-buildcachedir\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.235057 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.235084 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.235108 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc 
kubenswrapper[4824]: I0224 00:23:18.235148 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-container-storage-run\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.235175 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-builder-dockercfg-gqxxs-push\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.235424 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.235166 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-buildcachedir\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.236015 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-container-storage-run\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.236467 4824 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-buildworkdir\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.236604 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.236733 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-buildworkdir\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.236807 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-system-configs\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.236893 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.236988 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-container-storage-root\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.237105 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-builder-dockercfg-gqxxs-pull\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.237216 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsjxw\" (UniqueName: \"kubernetes.io/projected/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-kube-api-access-zsjxw\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.237159 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-system-configs\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.237291 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-container-storage-root\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.237440 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.237829 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.242343 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-builder-dockercfg-gqxxs-pull\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.242408 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-builder-dockercfg-gqxxs-push\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.261164 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsjxw\" (UniqueName: \"kubernetes.io/projected/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-kube-api-access-zsjxw\") pod \"sg-core-1-build\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " pod="service-telemetry/sg-core-1-build" Feb 24 00:23:18 crc kubenswrapper[4824]: I0224 00:23:18.562312 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Feb 24 00:23:19 crc kubenswrapper[4824]: I0224 00:23:19.003749 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 24 00:23:19 crc kubenswrapper[4824]: I0224 00:23:19.041187 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b","Type":"ContainerStarted","Data":"19bfab34288e937fc67ab246fc7a588fa300175ed336c770d21c78c7fed4ad39"} Feb 24 00:23:20 crc kubenswrapper[4824]: I0224 00:23:20.050693 4824 generic.go:334] "Generic (PLEG): container finished" podID="db72b2fe-6196-47d8-bbff-4e52d2fa9d9b" containerID="7a64796f61c6b30ad2646949ad30cbd46a80419e26ed2eb9237a6fafd7349ce1" exitCode=0 Feb 24 00:23:20 crc kubenswrapper[4824]: I0224 00:23:20.050760 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b","Type":"ContainerDied","Data":"7a64796f61c6b30ad2646949ad30cbd46a80419e26ed2eb9237a6fafd7349ce1"} Feb 24 00:23:21 crc kubenswrapper[4824]: I0224 00:23:21.060171 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b","Type":"ContainerStarted","Data":"c19054bd4aa49237cbc4c413f5e316849d64d7268bfda6b2fafaf463a52bcf01"} Feb 24 00:23:21 crc kubenswrapper[4824]: I0224 00:23:21.085558 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=4.08553601 podStartE2EDuration="4.08553601s" podCreationTimestamp="2026-02-24 00:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:23:21.084798651 +0000 UTC m=+1065.074423120" watchObservedRunningTime="2026-02-24 00:23:21.08553601 +0000 UTC m=+1065.075160489" Feb 24 00:23:28 crc 
kubenswrapper[4824]: I0224 00:23:28.271239 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.272009 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="db72b2fe-6196-47d8-bbff-4e52d2fa9d9b" containerName="docker-build" containerID="cri-o://c19054bd4aa49237cbc4c413f5e316849d64d7268bfda6b2fafaf463a52bcf01" gracePeriod=30 Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.659898 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_db72b2fe-6196-47d8-bbff-4e52d2fa9d9b/docker-build/0.log" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.661029 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.804708 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-container-storage-run\") pod \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.804842 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-builder-dockercfg-gqxxs-push\") pod \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.804914 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-container-storage-root\") pod \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\" (UID: 
\"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.804944 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsjxw\" (UniqueName: \"kubernetes.io/projected/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-kube-api-access-zsjxw\") pod \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.804969 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-buildworkdir\") pod \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.804995 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-proxy-ca-bundles\") pod \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.805073 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-ca-bundles\") pod \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.805352 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-builder-dockercfg-gqxxs-pull\") pod \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.805399 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-blob-cache\") pod \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.805422 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-system-configs\") pod \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.805446 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-buildcachedir\") pod \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.805481 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-node-pullsecrets\") pod \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\" (UID: \"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b\") " Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.805663 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b" (UID: "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.805720 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b" (UID: "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.805728 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b" (UID: "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.806273 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b" (UID: "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.807477 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b" (UID: "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.807605 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b" (UID: "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.807638 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b" (UID: "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.808081 4824 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.808456 4824 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.808481 4824 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.808499 4824 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.808540 4824 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.808558 4824 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.808579 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.813732 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-builder-dockercfg-gqxxs-push" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-push") pod "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b" (UID: "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b"). InnerVolumeSpecName "builder-dockercfg-gqxxs-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.813943 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-kube-api-access-zsjxw" (OuterVolumeSpecName: "kube-api-access-zsjxw") pod "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b" (UID: "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b"). InnerVolumeSpecName "kube-api-access-zsjxw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.818747 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-builder-dockercfg-gqxxs-pull" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-pull") pod "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b" (UID: "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b"). InnerVolumeSpecName "builder-dockercfg-gqxxs-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.910287 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-builder-dockercfg-gqxxs-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.910685 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsjxw\" (UniqueName: \"kubernetes.io/projected/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-kube-api-access-zsjxw\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.910768 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-builder-dockercfg-gqxxs-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.936783 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b" (UID: "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:23:28 crc kubenswrapper[4824]: I0224 00:23:28.950170 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b" (UID: "db72b2fe-6196-47d8-bbff-4e52d2fa9d9b"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.012716 4824 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.012765 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.119459 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_db72b2fe-6196-47d8-bbff-4e52d2fa9d9b/docker-build/0.log" Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.120058 4824 generic.go:334] "Generic (PLEG): container finished" podID="db72b2fe-6196-47d8-bbff-4e52d2fa9d9b" containerID="c19054bd4aa49237cbc4c413f5e316849d64d7268bfda6b2fafaf463a52bcf01" exitCode=1 Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.120126 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b","Type":"ContainerDied","Data":"c19054bd4aa49237cbc4c413f5e316849d64d7268bfda6b2fafaf463a52bcf01"} Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.120147 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.120172 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"db72b2fe-6196-47d8-bbff-4e52d2fa9d9b","Type":"ContainerDied","Data":"19bfab34288e937fc67ab246fc7a588fa300175ed336c770d21c78c7fed4ad39"} Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.120195 4824 scope.go:117] "RemoveContainer" containerID="c19054bd4aa49237cbc4c413f5e316849d64d7268bfda6b2fafaf463a52bcf01" Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.151151 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.156249 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.172001 4824 scope.go:117] "RemoveContainer" containerID="7a64796f61c6b30ad2646949ad30cbd46a80419e26ed2eb9237a6fafd7349ce1" Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.202324 4824 scope.go:117] "RemoveContainer" containerID="c19054bd4aa49237cbc4c413f5e316849d64d7268bfda6b2fafaf463a52bcf01" Feb 24 00:23:29 crc kubenswrapper[4824]: E0224 00:23:29.202926 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c19054bd4aa49237cbc4c413f5e316849d64d7268bfda6b2fafaf463a52bcf01\": container with ID starting with c19054bd4aa49237cbc4c413f5e316849d64d7268bfda6b2fafaf463a52bcf01 not found: ID does not exist" containerID="c19054bd4aa49237cbc4c413f5e316849d64d7268bfda6b2fafaf463a52bcf01" Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.202988 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c19054bd4aa49237cbc4c413f5e316849d64d7268bfda6b2fafaf463a52bcf01"} err="failed to get container status 
\"c19054bd4aa49237cbc4c413f5e316849d64d7268bfda6b2fafaf463a52bcf01\": rpc error: code = NotFound desc = could not find container \"c19054bd4aa49237cbc4c413f5e316849d64d7268bfda6b2fafaf463a52bcf01\": container with ID starting with c19054bd4aa49237cbc4c413f5e316849d64d7268bfda6b2fafaf463a52bcf01 not found: ID does not exist" Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.203021 4824 scope.go:117] "RemoveContainer" containerID="7a64796f61c6b30ad2646949ad30cbd46a80419e26ed2eb9237a6fafd7349ce1" Feb 24 00:23:29 crc kubenswrapper[4824]: E0224 00:23:29.203352 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a64796f61c6b30ad2646949ad30cbd46a80419e26ed2eb9237a6fafd7349ce1\": container with ID starting with 7a64796f61c6b30ad2646949ad30cbd46a80419e26ed2eb9237a6fafd7349ce1 not found: ID does not exist" containerID="7a64796f61c6b30ad2646949ad30cbd46a80419e26ed2eb9237a6fafd7349ce1" Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.203372 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a64796f61c6b30ad2646949ad30cbd46a80419e26ed2eb9237a6fafd7349ce1"} err="failed to get container status \"7a64796f61c6b30ad2646949ad30cbd46a80419e26ed2eb9237a6fafd7349ce1\": rpc error: code = NotFound desc = could not find container \"7a64796f61c6b30ad2646949ad30cbd46a80419e26ed2eb9237a6fafd7349ce1\": container with ID starting with 7a64796f61c6b30ad2646949ad30cbd46a80419e26ed2eb9237a6fafd7349ce1 not found: ID does not exist" Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.941824 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Feb 24 00:23:29 crc kubenswrapper[4824]: E0224 00:23:29.942225 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db72b2fe-6196-47d8-bbff-4e52d2fa9d9b" containerName="manage-dockerfile" Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.942250 4824 
state_mem.go:107] "Deleted CPUSet assignment" podUID="db72b2fe-6196-47d8-bbff-4e52d2fa9d9b" containerName="manage-dockerfile" Feb 24 00:23:29 crc kubenswrapper[4824]: E0224 00:23:29.942283 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db72b2fe-6196-47d8-bbff-4e52d2fa9d9b" containerName="docker-build" Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.942294 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="db72b2fe-6196-47d8-bbff-4e52d2fa9d9b" containerName="docker-build" Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.942491 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="db72b2fe-6196-47d8-bbff-4e52d2fa9d9b" containerName="docker-build" Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.943888 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.946392 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-gqxxs" Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.947732 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-sys-config" Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.948106 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-global-ca" Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.950794 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-ca" Feb 24 00:23:29 crc kubenswrapper[4824]: I0224 00:23:29.953318 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.128963 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: 
\"kubernetes.io/secret/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-builder-dockercfg-gqxxs-pull\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.129006 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.129035 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.129248 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-buildworkdir\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.129287 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-system-configs\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.129358 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-buildcachedir\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.129394 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.129430 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-builder-dockercfg-gqxxs-push\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.129459 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.129537 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mcgr\" (UniqueName: \"kubernetes.io/projected/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-kube-api-access-4mcgr\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.129603 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-container-storage-root\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.129632 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-container-storage-run\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.230496 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-container-storage-root\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.230585 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-container-storage-run\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.230620 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-builder-dockercfg-gqxxs-pull\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.230639 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.230662 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.230715 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-buildworkdir\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.230737 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-system-configs\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.230794 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-buildcachedir\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.230815 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: 
\"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.230841 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-builder-dockercfg-gqxxs-push\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.230863 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.230883 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mcgr\" (UniqueName: \"kubernetes.io/projected/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-kube-api-access-4mcgr\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.230950 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-container-storage-root\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.231272 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-container-storage-run\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 
24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.231566 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.231844 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-buildcachedir\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.231880 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.232131 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-buildworkdir\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.232328 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-system-configs\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.232867 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.233110 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.239079 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-builder-dockercfg-gqxxs-pull\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.239116 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-builder-dockercfg-gqxxs-push\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.253262 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mcgr\" (UniqueName: \"kubernetes.io/projected/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-kube-api-access-4mcgr\") pod \"sg-core-2-build\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.302047 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.533247 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Feb 24 00:23:30 crc kubenswrapper[4824]: I0224 00:23:30.703239 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db72b2fe-6196-47d8-bbff-4e52d2fa9d9b" path="/var/lib/kubelet/pods/db72b2fe-6196-47d8-bbff-4e52d2fa9d9b/volumes" Feb 24 00:23:31 crc kubenswrapper[4824]: I0224 00:23:31.138489 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"77c0907d-81a1-4a0c-81cf-cac502f6c8dc","Type":"ContainerStarted","Data":"010655e9ac8d5df123bdac2e01019ab3d59a1beff08289046366fcfe142aaa7e"} Feb 24 00:23:31 crc kubenswrapper[4824]: I0224 00:23:31.138906 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"77c0907d-81a1-4a0c-81cf-cac502f6c8dc","Type":"ContainerStarted","Data":"b8562531694c8525f4bf4aec2b90a7ef10fb034a969d79ee542b0dea17176f13"} Feb 24 00:23:31 crc kubenswrapper[4824]: E0224 00:23:31.287018 4824 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.151:37134->38.102.83.151:38559: read tcp 38.102.83.151:37134->38.102.83.151:38559: read: connection reset by peer Feb 24 00:23:32 crc kubenswrapper[4824]: I0224 00:23:32.146907 4824 generic.go:334] "Generic (PLEG): container finished" podID="77c0907d-81a1-4a0c-81cf-cac502f6c8dc" containerID="010655e9ac8d5df123bdac2e01019ab3d59a1beff08289046366fcfe142aaa7e" exitCode=0 Feb 24 00:23:32 crc kubenswrapper[4824]: I0224 00:23:32.146979 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"77c0907d-81a1-4a0c-81cf-cac502f6c8dc","Type":"ContainerDied","Data":"010655e9ac8d5df123bdac2e01019ab3d59a1beff08289046366fcfe142aaa7e"} Feb 24 00:23:33 crc kubenswrapper[4824]: I0224 00:23:33.156319 4824 
generic.go:334] "Generic (PLEG): container finished" podID="77c0907d-81a1-4a0c-81cf-cac502f6c8dc" containerID="556b307c7c6ac911375fbe93a286eb266325bd855d629f38954bc83024b67b07" exitCode=0 Feb 24 00:23:33 crc kubenswrapper[4824]: I0224 00:23:33.156422 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"77c0907d-81a1-4a0c-81cf-cac502f6c8dc","Type":"ContainerDied","Data":"556b307c7c6ac911375fbe93a286eb266325bd855d629f38954bc83024b67b07"} Feb 24 00:23:33 crc kubenswrapper[4824]: I0224 00:23:33.201611 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_77c0907d-81a1-4a0c-81cf-cac502f6c8dc/manage-dockerfile/0.log" Feb 24 00:23:34 crc kubenswrapper[4824]: I0224 00:23:34.170421 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"77c0907d-81a1-4a0c-81cf-cac502f6c8dc","Type":"ContainerStarted","Data":"5924f945ac9743d4f5656ab07e6825552fcd0d8f58684ebca3f29b652db3d784"} Feb 24 00:23:34 crc kubenswrapper[4824]: I0224 00:23:34.211961 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=5.211935346 podStartE2EDuration="5.211935346s" podCreationTimestamp="2026-02-24 00:23:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:23:34.2062421 +0000 UTC m=+1078.195866599" watchObservedRunningTime="2026-02-24 00:23:34.211935346 +0000 UTC m=+1078.201559825" Feb 24 00:24:53 crc kubenswrapper[4824]: I0224 00:24:53.276756 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:24:53 crc kubenswrapper[4824]: I0224 
00:24:53.277712 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:25:23 crc kubenswrapper[4824]: I0224 00:25:23.276711 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:25:23 crc kubenswrapper[4824]: I0224 00:25:23.277670 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:25:53 crc kubenswrapper[4824]: I0224 00:25:53.276111 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:25:53 crc kubenswrapper[4824]: I0224 00:25:53.276882 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:25:53 crc kubenswrapper[4824]: I0224 00:25:53.276943 4824 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" Feb 24 00:25:53 crc kubenswrapper[4824]: I0224 00:25:53.278584 4824 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32f31702ad77be87a49a0e3d023914422f1fbe192b728a29ed31dacaa99cc4eb"} pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 00:25:53 crc kubenswrapper[4824]: I0224 00:25:53.278735 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" containerID="cri-o://32f31702ad77be87a49a0e3d023914422f1fbe192b728a29ed31dacaa99cc4eb" gracePeriod=600 Feb 24 00:25:54 crc kubenswrapper[4824]: I0224 00:25:54.223103 4824 generic.go:334] "Generic (PLEG): container finished" podID="939ca085-9383-42e6-b7d6-37f101137273" containerID="32f31702ad77be87a49a0e3d023914422f1fbe192b728a29ed31dacaa99cc4eb" exitCode=0 Feb 24 00:25:54 crc kubenswrapper[4824]: I0224 00:25:54.223171 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" event={"ID":"939ca085-9383-42e6-b7d6-37f101137273","Type":"ContainerDied","Data":"32f31702ad77be87a49a0e3d023914422f1fbe192b728a29ed31dacaa99cc4eb"} Feb 24 00:25:54 crc kubenswrapper[4824]: I0224 00:25:54.223664 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" event={"ID":"939ca085-9383-42e6-b7d6-37f101137273","Type":"ContainerStarted","Data":"8d819df51f5c54106cb947aecba467be6a7835d606611afb3e6526ac4d026f80"} Feb 24 00:25:54 crc kubenswrapper[4824]: I0224 00:25:54.223699 4824 scope.go:117] "RemoveContainer" 
containerID="960af4768f01705e18d424766fe08bb2ebb2088d821d2cb697f31ab6e24ccd28" Feb 24 00:26:46 crc kubenswrapper[4824]: I0224 00:26:46.621266 4824 generic.go:334] "Generic (PLEG): container finished" podID="77c0907d-81a1-4a0c-81cf-cac502f6c8dc" containerID="5924f945ac9743d4f5656ab07e6825552fcd0d8f58684ebca3f29b652db3d784" exitCode=0 Feb 24 00:26:46 crc kubenswrapper[4824]: I0224 00:26:46.621358 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"77c0907d-81a1-4a0c-81cf-cac502f6c8dc","Type":"ContainerDied","Data":"5924f945ac9743d4f5656ab07e6825552fcd0d8f58684ebca3f29b652db3d784"} Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.867936 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.929321 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-ca-bundles\") pod \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.929461 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-builder-dockercfg-gqxxs-push\") pod \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.929506 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-blob-cache\") pod \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.929544 4824 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-container-storage-run\") pod \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.929598 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-buildcachedir\") pod \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.929627 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mcgr\" (UniqueName: \"kubernetes.io/projected/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-kube-api-access-4mcgr\") pod \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.929667 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-node-pullsecrets\") pod \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.929713 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-buildworkdir\") pod \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.929753 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-proxy-ca-bundles\") pod \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.929788 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-system-configs\") pod \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.929862 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-builder-dockercfg-gqxxs-pull\") pod \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.929897 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-container-storage-root\") pod \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\" (UID: \"77c0907d-81a1-4a0c-81cf-cac502f6c8dc\") " Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.931056 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "77c0907d-81a1-4a0c-81cf-cac502f6c8dc" (UID: "77c0907d-81a1-4a0c-81cf-cac502f6c8dc"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.931784 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "77c0907d-81a1-4a0c-81cf-cac502f6c8dc" (UID: "77c0907d-81a1-4a0c-81cf-cac502f6c8dc"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.933638 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "77c0907d-81a1-4a0c-81cf-cac502f6c8dc" (UID: "77c0907d-81a1-4a0c-81cf-cac502f6c8dc"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.934139 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "77c0907d-81a1-4a0c-81cf-cac502f6c8dc" (UID: "77c0907d-81a1-4a0c-81cf-cac502f6c8dc"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.934192 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "77c0907d-81a1-4a0c-81cf-cac502f6c8dc" (UID: "77c0907d-81a1-4a0c-81cf-cac502f6c8dc"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.934206 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "77c0907d-81a1-4a0c-81cf-cac502f6c8dc" (UID: "77c0907d-81a1-4a0c-81cf-cac502f6c8dc"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.939856 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-builder-dockercfg-gqxxs-pull" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-pull") pod "77c0907d-81a1-4a0c-81cf-cac502f6c8dc" (UID: "77c0907d-81a1-4a0c-81cf-cac502f6c8dc"). InnerVolumeSpecName "builder-dockercfg-gqxxs-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.940727 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-builder-dockercfg-gqxxs-push" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-push") pod "77c0907d-81a1-4a0c-81cf-cac502f6c8dc" (UID: "77c0907d-81a1-4a0c-81cf-cac502f6c8dc"). InnerVolumeSpecName "builder-dockercfg-gqxxs-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.940822 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-kube-api-access-4mcgr" (OuterVolumeSpecName: "kube-api-access-4mcgr") pod "77c0907d-81a1-4a0c-81cf-cac502f6c8dc" (UID: "77c0907d-81a1-4a0c-81cf-cac502f6c8dc"). InnerVolumeSpecName "kube-api-access-4mcgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:26:47 crc kubenswrapper[4824]: I0224 00:26:47.948684 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "77c0907d-81a1-4a0c-81cf-cac502f6c8dc" (UID: "77c0907d-81a1-4a0c-81cf-cac502f6c8dc"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:26:48 crc kubenswrapper[4824]: I0224 00:26:48.031737 4824 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:26:48 crc kubenswrapper[4824]: I0224 00:26:48.031786 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-builder-dockercfg-gqxxs-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:26:48 crc kubenswrapper[4824]: I0224 00:26:48.031796 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:26:48 crc kubenswrapper[4824]: I0224 00:26:48.031808 4824 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:26:48 crc kubenswrapper[4824]: I0224 00:26:48.031818 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mcgr\" (UniqueName: \"kubernetes.io/projected/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-kube-api-access-4mcgr\") on node \"crc\" DevicePath \"\"" Feb 24 00:26:48 crc kubenswrapper[4824]: I0224 00:26:48.031829 4824 reconciler_common.go:293] "Volume detached 
for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:26:48 crc kubenswrapper[4824]: I0224 00:26:48.031838 4824 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:26:48 crc kubenswrapper[4824]: I0224 00:26:48.031847 4824 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:26:48 crc kubenswrapper[4824]: I0224 00:26:48.031856 4824 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:26:48 crc kubenswrapper[4824]: I0224 00:26:48.031866 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-builder-dockercfg-gqxxs-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:26:48 crc kubenswrapper[4824]: I0224 00:26:48.323925 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "77c0907d-81a1-4a0c-81cf-cac502f6c8dc" (UID: "77c0907d-81a1-4a0c-81cf-cac502f6c8dc"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:26:48 crc kubenswrapper[4824]: I0224 00:26:48.337139 4824 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:26:48 crc kubenswrapper[4824]: I0224 00:26:48.639165 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"77c0907d-81a1-4a0c-81cf-cac502f6c8dc","Type":"ContainerDied","Data":"b8562531694c8525f4bf4aec2b90a7ef10fb034a969d79ee542b0dea17176f13"} Feb 24 00:26:48 crc kubenswrapper[4824]: I0224 00:26:48.639231 4824 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8562531694c8525f4bf4aec2b90a7ef10fb034a969d79ee542b0dea17176f13" Feb 24 00:26:48 crc kubenswrapper[4824]: I0224 00:26:48.639232 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 24 00:26:50 crc kubenswrapper[4824]: I0224 00:26:50.535683 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "77c0907d-81a1-4a0c-81cf-cac502f6c8dc" (UID: "77c0907d-81a1-4a0c-81cf-cac502f6c8dc"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:26:50 crc kubenswrapper[4824]: I0224 00:26:50.578173 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77c0907d-81a1-4a0c-81cf-cac502f6c8dc-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.665335 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 24 00:26:52 crc kubenswrapper[4824]: E0224 00:26:52.666080 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77c0907d-81a1-4a0c-81cf-cac502f6c8dc" containerName="git-clone" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.666097 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="77c0907d-81a1-4a0c-81cf-cac502f6c8dc" containerName="git-clone" Feb 24 00:26:52 crc kubenswrapper[4824]: E0224 00:26:52.666118 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77c0907d-81a1-4a0c-81cf-cac502f6c8dc" containerName="docker-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.666124 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="77c0907d-81a1-4a0c-81cf-cac502f6c8dc" containerName="docker-build" Feb 24 00:26:52 crc kubenswrapper[4824]: E0224 00:26:52.666136 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77c0907d-81a1-4a0c-81cf-cac502f6c8dc" containerName="manage-dockerfile" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.666142 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="77c0907d-81a1-4a0c-81cf-cac502f6c8dc" containerName="manage-dockerfile" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.666268 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="77c0907d-81a1-4a0c-81cf-cac502f6c8dc" containerName="docker-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.667065 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.669442 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-gqxxs" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.669809 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-sys-config" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.670014 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-global-ca" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.670263 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-ca" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.704541 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.734607 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.734661 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/af6c88fc-9fa1-46aa-9060-3d202479481c-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.734706 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/af6c88fc-9fa1-46aa-9060-3d202479481c-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.734732 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/af6c88fc-9fa1-46aa-9060-3d202479481c-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.734754 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.734790 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/af6c88fc-9fa1-46aa-9060-3d202479481c-builder-dockercfg-gqxxs-pull\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.734816 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.734849 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/af6c88fc-9fa1-46aa-9060-3d202479481c-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.734888 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/af6c88fc-9fa1-46aa-9060-3d202479481c-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.734923 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/af6c88fc-9fa1-46aa-9060-3d202479481c-builder-dockercfg-gqxxs-push\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.734990 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.735021 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49sqs\" (UniqueName: \"kubernetes.io/projected/af6c88fc-9fa1-46aa-9060-3d202479481c-kube-api-access-49sqs\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.837058 4824 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/af6c88fc-9fa1-46aa-9060-3d202479481c-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.837200 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/af6c88fc-9fa1-46aa-9060-3d202479481c-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.837255 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/af6c88fc-9fa1-46aa-9060-3d202479481c-builder-dockercfg-gqxxs-push\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.837313 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.837337 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49sqs\" (UniqueName: \"kubernetes.io/projected/af6c88fc-9fa1-46aa-9060-3d202479481c-kube-api-access-49sqs\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.837861 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.837919 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.837962 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/af6c88fc-9fa1-46aa-9060-3d202479481c-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.838013 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/af6c88fc-9fa1-46aa-9060-3d202479481c-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.838052 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.838096 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/af6c88fc-9fa1-46aa-9060-3d202479481c-node-pullsecrets\") 
pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.838121 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/af6c88fc-9fa1-46aa-9060-3d202479481c-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.838315 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/af6c88fc-9fa1-46aa-9060-3d202479481c-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.838570 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/af6c88fc-9fa1-46aa-9060-3d202479481c-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.838616 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.838617 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/af6c88fc-9fa1-46aa-9060-3d202479481c-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " 
pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.838658 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/af6c88fc-9fa1-46aa-9060-3d202479481c-builder-dockercfg-gqxxs-pull\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.838679 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/af6c88fc-9fa1-46aa-9060-3d202479481c-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.838788 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:52 crc kubenswrapper[4824]: I0224 00:26:52.839022 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:53 crc kubenswrapper[4824]: I0224 00:26:53.324976 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:53 crc kubenswrapper[4824]: I0224 
00:26:53.325639 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/af6c88fc-9fa1-46aa-9060-3d202479481c-builder-dockercfg-gqxxs-pull\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:53 crc kubenswrapper[4824]: I0224 00:26:53.325684 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/af6c88fc-9fa1-46aa-9060-3d202479481c-builder-dockercfg-gqxxs-push\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:53 crc kubenswrapper[4824]: I0224 00:26:53.329057 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49sqs\" (UniqueName: \"kubernetes.io/projected/af6c88fc-9fa1-46aa-9060-3d202479481c-kube-api-access-49sqs\") pod \"sg-bridge-1-build\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:53 crc kubenswrapper[4824]: I0224 00:26:53.587884 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Feb 24 00:26:53 crc kubenswrapper[4824]: I0224 00:26:53.811000 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 24 00:26:54 crc kubenswrapper[4824]: I0224 00:26:54.682925 4824 generic.go:334] "Generic (PLEG): container finished" podID="af6c88fc-9fa1-46aa-9060-3d202479481c" containerID="6597cb9e10c28b83d05743f49d3783a414315524b71024badd2e95d60b7946b9" exitCode=0 Feb 24 00:26:54 crc kubenswrapper[4824]: I0224 00:26:54.683012 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"af6c88fc-9fa1-46aa-9060-3d202479481c","Type":"ContainerDied","Data":"6597cb9e10c28b83d05743f49d3783a414315524b71024badd2e95d60b7946b9"} Feb 24 00:26:54 crc kubenswrapper[4824]: I0224 00:26:54.683543 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"af6c88fc-9fa1-46aa-9060-3d202479481c","Type":"ContainerStarted","Data":"1dd6865d475a5af0f51171b6fff366385505696bcae6478e182f5800edc3ac14"} Feb 24 00:26:55 crc kubenswrapper[4824]: I0224 00:26:55.695015 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"af6c88fc-9fa1-46aa-9060-3d202479481c","Type":"ContainerStarted","Data":"36ea10b047001c97d42518d7957d7822f59fe55dfe1fae6a92c5691f9b7ed45f"} Feb 24 00:26:55 crc kubenswrapper[4824]: I0224 00:26:55.726028 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=3.7260061110000002 podStartE2EDuration="3.726006111s" podCreationTimestamp="2026-02-24 00:26:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:26:55.720337678 +0000 UTC m=+1279.709962167" watchObservedRunningTime="2026-02-24 00:26:55.726006111 +0000 UTC m=+1279.715630600" Feb 24 
00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.334084 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.335031 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-bridge-1-build" podUID="af6c88fc-9fa1-46aa-9060-3d202479481c" containerName="docker-build" containerID="cri-o://36ea10b047001c97d42518d7957d7822f59fe55dfe1fae6a92c5691f9b7ed45f" gracePeriod=30 Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.697447 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_af6c88fc-9fa1-46aa-9060-3d202479481c/docker-build/0.log" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.698234 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.712091 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/af6c88fc-9fa1-46aa-9060-3d202479481c-builder-dockercfg-gqxxs-pull\") pod \"af6c88fc-9fa1-46aa-9060-3d202479481c\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.712623 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/af6c88fc-9fa1-46aa-9060-3d202479481c-build-system-configs\") pod \"af6c88fc-9fa1-46aa-9060-3d202479481c\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.712706 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-container-storage-run\") pod \"af6c88fc-9fa1-46aa-9060-3d202479481c\" (UID: 
\"af6c88fc-9fa1-46aa-9060-3d202479481c\") " Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.712753 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-buildworkdir\") pod \"af6c88fc-9fa1-46aa-9060-3d202479481c\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.712786 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/af6c88fc-9fa1-46aa-9060-3d202479481c-node-pullsecrets\") pod \"af6c88fc-9fa1-46aa-9060-3d202479481c\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.712842 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49sqs\" (UniqueName: \"kubernetes.io/projected/af6c88fc-9fa1-46aa-9060-3d202479481c-kube-api-access-49sqs\") pod \"af6c88fc-9fa1-46aa-9060-3d202479481c\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.712890 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/af6c88fc-9fa1-46aa-9060-3d202479481c-buildcachedir\") pod \"af6c88fc-9fa1-46aa-9060-3d202479481c\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.712932 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-container-storage-root\") pod \"af6c88fc-9fa1-46aa-9060-3d202479481c\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.712965 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/af6c88fc-9fa1-46aa-9060-3d202479481c-builder-dockercfg-gqxxs-push\") pod \"af6c88fc-9fa1-46aa-9060-3d202479481c\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.712993 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/af6c88fc-9fa1-46aa-9060-3d202479481c-build-proxy-ca-bundles\") pod \"af6c88fc-9fa1-46aa-9060-3d202479481c\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.713054 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/af6c88fc-9fa1-46aa-9060-3d202479481c-build-ca-bundles\") pod \"af6c88fc-9fa1-46aa-9060-3d202479481c\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.713085 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-build-blob-cache\") pod \"af6c88fc-9fa1-46aa-9060-3d202479481c\" (UID: \"af6c88fc-9fa1-46aa-9060-3d202479481c\") " Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.714010 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af6c88fc-9fa1-46aa-9060-3d202479481c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "af6c88fc-9fa1-46aa-9060-3d202479481c" (UID: "af6c88fc-9fa1-46aa-9060-3d202479481c"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.714526 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af6c88fc-9fa1-46aa-9060-3d202479481c-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "af6c88fc-9fa1-46aa-9060-3d202479481c" (UID: "af6c88fc-9fa1-46aa-9060-3d202479481c"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.715014 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "af6c88fc-9fa1-46aa-9060-3d202479481c" (UID: "af6c88fc-9fa1-46aa-9060-3d202479481c"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.715029 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af6c88fc-9fa1-46aa-9060-3d202479481c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "af6c88fc-9fa1-46aa-9060-3d202479481c" (UID: "af6c88fc-9fa1-46aa-9060-3d202479481c"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.715475 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af6c88fc-9fa1-46aa-9060-3d202479481c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "af6c88fc-9fa1-46aa-9060-3d202479481c" (UID: "af6c88fc-9fa1-46aa-9060-3d202479481c"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.715576 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af6c88fc-9fa1-46aa-9060-3d202479481c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "af6c88fc-9fa1-46aa-9060-3d202479481c" (UID: "af6c88fc-9fa1-46aa-9060-3d202479481c"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.715846 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "af6c88fc-9fa1-46aa-9060-3d202479481c" (UID: "af6c88fc-9fa1-46aa-9060-3d202479481c"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.718503 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6c88fc-9fa1-46aa-9060-3d202479481c-builder-dockercfg-gqxxs-push" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-push") pod "af6c88fc-9fa1-46aa-9060-3d202479481c" (UID: "af6c88fc-9fa1-46aa-9060-3d202479481c"). InnerVolumeSpecName "builder-dockercfg-gqxxs-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.719145 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6c88fc-9fa1-46aa-9060-3d202479481c-kube-api-access-49sqs" (OuterVolumeSpecName: "kube-api-access-49sqs") pod "af6c88fc-9fa1-46aa-9060-3d202479481c" (UID: "af6c88fc-9fa1-46aa-9060-3d202479481c"). InnerVolumeSpecName "kube-api-access-49sqs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.719653 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6c88fc-9fa1-46aa-9060-3d202479481c-builder-dockercfg-gqxxs-pull" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-pull") pod "af6c88fc-9fa1-46aa-9060-3d202479481c" (UID: "af6c88fc-9fa1-46aa-9060-3d202479481c"). InnerVolumeSpecName "builder-dockercfg-gqxxs-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.766163 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_af6c88fc-9fa1-46aa-9060-3d202479481c/docker-build/0.log" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.766631 4824 generic.go:334] "Generic (PLEG): container finished" podID="af6c88fc-9fa1-46aa-9060-3d202479481c" containerID="36ea10b047001c97d42518d7957d7822f59fe55dfe1fae6a92c5691f9b7ed45f" exitCode=1 Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.766701 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"af6c88fc-9fa1-46aa-9060-3d202479481c","Type":"ContainerDied","Data":"36ea10b047001c97d42518d7957d7822f59fe55dfe1fae6a92c5691f9b7ed45f"} Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.766780 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"af6c88fc-9fa1-46aa-9060-3d202479481c","Type":"ContainerDied","Data":"1dd6865d475a5af0f51171b6fff366385505696bcae6478e182f5800edc3ac14"} Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.766771 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.766800 4824 scope.go:117] "RemoveContainer" containerID="36ea10b047001c97d42518d7957d7822f59fe55dfe1fae6a92c5691f9b7ed45f" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.789919 4824 scope.go:117] "RemoveContainer" containerID="6597cb9e10c28b83d05743f49d3783a414315524b71024badd2e95d60b7946b9" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.810018 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "af6c88fc-9fa1-46aa-9060-3d202479481c" (UID: "af6c88fc-9fa1-46aa-9060-3d202479481c"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.815030 4824 scope.go:117] "RemoveContainer" containerID="36ea10b047001c97d42518d7957d7822f59fe55dfe1fae6a92c5691f9b7ed45f" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.815308 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.815340 4824 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.815352 4824 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/af6c88fc-9fa1-46aa-9060-3d202479481c-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.815361 4824 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-49sqs\" (UniqueName: \"kubernetes.io/projected/af6c88fc-9fa1-46aa-9060-3d202479481c-kube-api-access-49sqs\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.815370 4824 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/af6c88fc-9fa1-46aa-9060-3d202479481c-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.815379 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/af6c88fc-9fa1-46aa-9060-3d202479481c-builder-dockercfg-gqxxs-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.815388 4824 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/af6c88fc-9fa1-46aa-9060-3d202479481c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.815399 4824 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/af6c88fc-9fa1-46aa-9060-3d202479481c-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.815407 4824 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.815415 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/af6c88fc-9fa1-46aa-9060-3d202479481c-builder-dockercfg-gqxxs-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.815423 4824 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/af6c88fc-9fa1-46aa-9060-3d202479481c-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:03 crc kubenswrapper[4824]: E0224 00:27:03.815543 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36ea10b047001c97d42518d7957d7822f59fe55dfe1fae6a92c5691f9b7ed45f\": container with ID starting with 36ea10b047001c97d42518d7957d7822f59fe55dfe1fae6a92c5691f9b7ed45f not found: ID does not exist" containerID="36ea10b047001c97d42518d7957d7822f59fe55dfe1fae6a92c5691f9b7ed45f" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.815591 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36ea10b047001c97d42518d7957d7822f59fe55dfe1fae6a92c5691f9b7ed45f"} err="failed to get container status \"36ea10b047001c97d42518d7957d7822f59fe55dfe1fae6a92c5691f9b7ed45f\": rpc error: code = NotFound desc = could not find container \"36ea10b047001c97d42518d7957d7822f59fe55dfe1fae6a92c5691f9b7ed45f\": container with ID starting with 36ea10b047001c97d42518d7957d7822f59fe55dfe1fae6a92c5691f9b7ed45f not found: ID does not exist" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.815628 4824 scope.go:117] "RemoveContainer" containerID="6597cb9e10c28b83d05743f49d3783a414315524b71024badd2e95d60b7946b9" Feb 24 00:27:03 crc kubenswrapper[4824]: E0224 00:27:03.815980 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6597cb9e10c28b83d05743f49d3783a414315524b71024badd2e95d60b7946b9\": container with ID starting with 6597cb9e10c28b83d05743f49d3783a414315524b71024badd2e95d60b7946b9 not found: ID does not exist" containerID="6597cb9e10c28b83d05743f49d3783a414315524b71024badd2e95d60b7946b9" Feb 24 00:27:03 crc kubenswrapper[4824]: I0224 00:27:03.816017 4824 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6597cb9e10c28b83d05743f49d3783a414315524b71024badd2e95d60b7946b9"} err="failed to get container status \"6597cb9e10c28b83d05743f49d3783a414315524b71024badd2e95d60b7946b9\": rpc error: code = NotFound desc = could not find container \"6597cb9e10c28b83d05743f49d3783a414315524b71024badd2e95d60b7946b9\": container with ID starting with 6597cb9e10c28b83d05743f49d3783a414315524b71024badd2e95d60b7946b9 not found: ID does not exist" Feb 24 00:27:04 crc kubenswrapper[4824]: I0224 00:27:04.096153 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "af6c88fc-9fa1-46aa-9060-3d202479481c" (UID: "af6c88fc-9fa1-46aa-9060-3d202479481c"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:27:04 crc kubenswrapper[4824]: I0224 00:27:04.120428 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/af6c88fc-9fa1-46aa-9060-3d202479481c-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:04 crc kubenswrapper[4824]: I0224 00:27:04.412169 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 24 00:27:04 crc kubenswrapper[4824]: I0224 00:27:04.419039 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 24 00:27:04 crc kubenswrapper[4824]: I0224 00:27:04.703148 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af6c88fc-9fa1-46aa-9060-3d202479481c" path="/var/lib/kubelet/pods/af6c88fc-9fa1-46aa-9060-3d202479481c/volumes" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.168560 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Feb 24 00:27:05 crc kubenswrapper[4824]: E0224 
00:27:05.169372 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af6c88fc-9fa1-46aa-9060-3d202479481c" containerName="manage-dockerfile" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.169457 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6c88fc-9fa1-46aa-9060-3d202479481c" containerName="manage-dockerfile" Feb 24 00:27:05 crc kubenswrapper[4824]: E0224 00:27:05.169573 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af6c88fc-9fa1-46aa-9060-3d202479481c" containerName="docker-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.169652 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6c88fc-9fa1-46aa-9060-3d202479481c" containerName="docker-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.169865 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="af6c88fc-9fa1-46aa-9060-3d202479481c" containerName="docker-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.171027 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.173762 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-sys-config" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.173834 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-ca" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.174178 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-gqxxs" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.175605 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-global-ca" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.192183 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.235099 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.235158 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e64cc79f-399d-4a53-b509-a6618d565cbf-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.235183 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/e64cc79f-399d-4a53-b509-a6618d565cbf-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.235205 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.235226 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.235483 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnvml\" (UniqueName: \"kubernetes.io/projected/e64cc79f-399d-4a53-b509-a6618d565cbf-kube-api-access-bnvml\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.235611 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e64cc79f-399d-4a53-b509-a6618d565cbf-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.235733 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/e64cc79f-399d-4a53-b509-a6618d565cbf-builder-dockercfg-gqxxs-pull\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.235823 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e64cc79f-399d-4a53-b509-a6618d565cbf-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.235868 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/e64cc79f-399d-4a53-b509-a6618d565cbf-builder-dockercfg-gqxxs-push\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.235962 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.236092 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e64cc79f-399d-4a53-b509-a6618d565cbf-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.337488 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.337874 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e64cc79f-399d-4a53-b509-a6618d565cbf-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.337961 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e64cc79f-399d-4a53-b509-a6618d565cbf-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.338089 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.338184 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e64cc79f-399d-4a53-b509-a6618d565cbf-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.338279 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.338222 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.338424 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnvml\" (UniqueName: \"kubernetes.io/projected/e64cc79f-399d-4a53-b509-a6618d565cbf-kube-api-access-bnvml\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.338472 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e64cc79f-399d-4a53-b509-a6618d565cbf-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.338676 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/e64cc79f-399d-4a53-b509-a6618d565cbf-builder-dockercfg-gqxxs-pull\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.338744 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/e64cc79f-399d-4a53-b509-a6618d565cbf-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.338795 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/e64cc79f-399d-4a53-b509-a6618d565cbf-builder-dockercfg-gqxxs-push\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.338868 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.338887 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.338887 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e64cc79f-399d-4a53-b509-a6618d565cbf-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.339029 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e64cc79f-399d-4a53-b509-a6618d565cbf-build-system-configs\") pod \"sg-bridge-2-build\" 
(UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.339168 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.339205 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e64cc79f-399d-4a53-b509-a6618d565cbf-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.339394 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.339848 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e64cc79f-399d-4a53-b509-a6618d565cbf-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.340020 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e64cc79f-399d-4a53-b509-a6618d565cbf-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc 
kubenswrapper[4824]: I0224 00:27:05.354743 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/e64cc79f-399d-4a53-b509-a6618d565cbf-builder-dockercfg-gqxxs-pull\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.356215 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/e64cc79f-399d-4a53-b509-a6618d565cbf-builder-dockercfg-gqxxs-push\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.358058 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnvml\" (UniqueName: \"kubernetes.io/projected/e64cc79f-399d-4a53-b509-a6618d565cbf-kube-api-access-bnvml\") pod \"sg-bridge-2-build\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.488644 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.744856 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Feb 24 00:27:05 crc kubenswrapper[4824]: I0224 00:27:05.787545 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"e64cc79f-399d-4a53-b509-a6618d565cbf","Type":"ContainerStarted","Data":"954236d9ba17ac1f320cbfdd5017cf9874f771c683ae5d5acffcb803cd49b53e"} Feb 24 00:27:06 crc kubenswrapper[4824]: I0224 00:27:06.798466 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"e64cc79f-399d-4a53-b509-a6618d565cbf","Type":"ContainerStarted","Data":"da3bb2a807bfead3b4300c93db29f7a5d7e778cfc90eded40f001b306353f2f6"} Feb 24 00:27:07 crc kubenswrapper[4824]: I0224 00:27:07.807614 4824 generic.go:334] "Generic (PLEG): container finished" podID="e64cc79f-399d-4a53-b509-a6618d565cbf" containerID="da3bb2a807bfead3b4300c93db29f7a5d7e778cfc90eded40f001b306353f2f6" exitCode=0 Feb 24 00:27:07 crc kubenswrapper[4824]: I0224 00:27:07.807701 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"e64cc79f-399d-4a53-b509-a6618d565cbf","Type":"ContainerDied","Data":"da3bb2a807bfead3b4300c93db29f7a5d7e778cfc90eded40f001b306353f2f6"} Feb 24 00:27:08 crc kubenswrapper[4824]: I0224 00:27:08.819343 4824 generic.go:334] "Generic (PLEG): container finished" podID="e64cc79f-399d-4a53-b509-a6618d565cbf" containerID="d8656546214a503366dda5373f672466c18181aa2f185edec71172c976bece6d" exitCode=0 Feb 24 00:27:08 crc kubenswrapper[4824]: I0224 00:27:08.819446 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"e64cc79f-399d-4a53-b509-a6618d565cbf","Type":"ContainerDied","Data":"d8656546214a503366dda5373f672466c18181aa2f185edec71172c976bece6d"} Feb 24 00:27:08 
crc kubenswrapper[4824]: I0224 00:27:08.866187 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_e64cc79f-399d-4a53-b509-a6618d565cbf/manage-dockerfile/0.log" Feb 24 00:27:09 crc kubenswrapper[4824]: I0224 00:27:09.831489 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"e64cc79f-399d-4a53-b509-a6618d565cbf","Type":"ContainerStarted","Data":"09c4d2134f2b9fafd8744d2eff2f7d16da5b901065d6281daff1b2c6fd989a55"} Feb 24 00:27:09 crc kubenswrapper[4824]: I0224 00:27:09.880998 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=4.880968065 podStartE2EDuration="4.880968065s" podCreationTimestamp="2026-02-24 00:27:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:27:09.872905512 +0000 UTC m=+1293.862530021" watchObservedRunningTime="2026-02-24 00:27:09.880968065 +0000 UTC m=+1293.870592584" Feb 24 00:27:53 crc kubenswrapper[4824]: I0224 00:27:53.276180 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:27:53 crc kubenswrapper[4824]: I0224 00:27:53.277132 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:27:54 crc kubenswrapper[4824]: I0224 00:27:54.137697 4824 generic.go:334] "Generic (PLEG): container finished" 
podID="e64cc79f-399d-4a53-b509-a6618d565cbf" containerID="09c4d2134f2b9fafd8744d2eff2f7d16da5b901065d6281daff1b2c6fd989a55" exitCode=0 Feb 24 00:27:54 crc kubenswrapper[4824]: I0224 00:27:54.137776 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"e64cc79f-399d-4a53-b509-a6618d565cbf","Type":"ContainerDied","Data":"09c4d2134f2b9fafd8744d2eff2f7d16da5b901065d6281daff1b2c6fd989a55"} Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.474316 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.607898 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-build-blob-cache\") pod \"e64cc79f-399d-4a53-b509-a6618d565cbf\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.607980 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnvml\" (UniqueName: \"kubernetes.io/projected/e64cc79f-399d-4a53-b509-a6618d565cbf-kube-api-access-bnvml\") pod \"e64cc79f-399d-4a53-b509-a6618d565cbf\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.608025 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e64cc79f-399d-4a53-b509-a6618d565cbf-buildcachedir\") pod \"e64cc79f-399d-4a53-b509-a6618d565cbf\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.608055 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e64cc79f-399d-4a53-b509-a6618d565cbf-build-system-configs\") pod 
\"e64cc79f-399d-4a53-b509-a6618d565cbf\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.608099 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/e64cc79f-399d-4a53-b509-a6618d565cbf-builder-dockercfg-gqxxs-push\") pod \"e64cc79f-399d-4a53-b509-a6618d565cbf\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.608141 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e64cc79f-399d-4a53-b509-a6618d565cbf-build-proxy-ca-bundles\") pod \"e64cc79f-399d-4a53-b509-a6618d565cbf\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.608201 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-container-storage-root\") pod \"e64cc79f-399d-4a53-b509-a6618d565cbf\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.608243 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-buildworkdir\") pod \"e64cc79f-399d-4a53-b509-a6618d565cbf\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.608281 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-container-storage-run\") pod \"e64cc79f-399d-4a53-b509-a6618d565cbf\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.608262 4824 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e64cc79f-399d-4a53-b509-a6618d565cbf-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "e64cc79f-399d-4a53-b509-a6618d565cbf" (UID: "e64cc79f-399d-4a53-b509-a6618d565cbf"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.608297 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e64cc79f-399d-4a53-b509-a6618d565cbf-node-pullsecrets\") pod \"e64cc79f-399d-4a53-b509-a6618d565cbf\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.608379 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e64cc79f-399d-4a53-b509-a6618d565cbf-build-ca-bundles\") pod \"e64cc79f-399d-4a53-b509-a6618d565cbf\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.608390 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e64cc79f-399d-4a53-b509-a6618d565cbf-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "e64cc79f-399d-4a53-b509-a6618d565cbf" (UID: "e64cc79f-399d-4a53-b509-a6618d565cbf"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.608408 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/e64cc79f-399d-4a53-b509-a6618d565cbf-builder-dockercfg-gqxxs-pull\") pod \"e64cc79f-399d-4a53-b509-a6618d565cbf\" (UID: \"e64cc79f-399d-4a53-b509-a6618d565cbf\") " Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.608625 4824 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e64cc79f-399d-4a53-b509-a6618d565cbf-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.608637 4824 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e64cc79f-399d-4a53-b509-a6618d565cbf-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.609770 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e64cc79f-399d-4a53-b509-a6618d565cbf-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "e64cc79f-399d-4a53-b509-a6618d565cbf" (UID: "e64cc79f-399d-4a53-b509-a6618d565cbf"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.609905 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e64cc79f-399d-4a53-b509-a6618d565cbf-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "e64cc79f-399d-4a53-b509-a6618d565cbf" (UID: "e64cc79f-399d-4a53-b509-a6618d565cbf"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.610314 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "e64cc79f-399d-4a53-b509-a6618d565cbf" (UID: "e64cc79f-399d-4a53-b509-a6618d565cbf"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.610665 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e64cc79f-399d-4a53-b509-a6618d565cbf-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "e64cc79f-399d-4a53-b509-a6618d565cbf" (UID: "e64cc79f-399d-4a53-b509-a6618d565cbf"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.610851 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "e64cc79f-399d-4a53-b509-a6618d565cbf" (UID: "e64cc79f-399d-4a53-b509-a6618d565cbf"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.615912 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e64cc79f-399d-4a53-b509-a6618d565cbf-kube-api-access-bnvml" (OuterVolumeSpecName: "kube-api-access-bnvml") pod "e64cc79f-399d-4a53-b509-a6618d565cbf" (UID: "e64cc79f-399d-4a53-b509-a6618d565cbf"). InnerVolumeSpecName "kube-api-access-bnvml". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.616034 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e64cc79f-399d-4a53-b509-a6618d565cbf-builder-dockercfg-gqxxs-push" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-push") pod "e64cc79f-399d-4a53-b509-a6618d565cbf" (UID: "e64cc79f-399d-4a53-b509-a6618d565cbf"). InnerVolumeSpecName "builder-dockercfg-gqxxs-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.617114 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e64cc79f-399d-4a53-b509-a6618d565cbf-builder-dockercfg-gqxxs-pull" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-pull") pod "e64cc79f-399d-4a53-b509-a6618d565cbf" (UID: "e64cc79f-399d-4a53-b509-a6618d565cbf"). InnerVolumeSpecName "builder-dockercfg-gqxxs-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.710429 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnvml\" (UniqueName: \"kubernetes.io/projected/e64cc79f-399d-4a53-b509-a6618d565cbf-kube-api-access-bnvml\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.710490 4824 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e64cc79f-399d-4a53-b509-a6618d565cbf-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.710509 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/e64cc79f-399d-4a53-b509-a6618d565cbf-builder-dockercfg-gqxxs-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.710563 4824 reconciler_common.go:293] "Volume detached for volume 
\"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e64cc79f-399d-4a53-b509-a6618d565cbf-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.710583 4824 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.710605 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.710623 4824 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e64cc79f-399d-4a53-b509-a6618d565cbf-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.710642 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/e64cc79f-399d-4a53-b509-a6618d565cbf-builder-dockercfg-gqxxs-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.740474 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "e64cc79f-399d-4a53-b509-a6618d565cbf" (UID: "e64cc79f-399d-4a53-b509-a6618d565cbf"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:27:55 crc kubenswrapper[4824]: I0224 00:27:55.812199 4824 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:56 crc kubenswrapper[4824]: I0224 00:27:56.154185 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"e64cc79f-399d-4a53-b509-a6618d565cbf","Type":"ContainerDied","Data":"954236d9ba17ac1f320cbfdd5017cf9874f771c683ae5d5acffcb803cd49b53e"} Feb 24 00:27:56 crc kubenswrapper[4824]: I0224 00:27:56.154235 4824 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="954236d9ba17ac1f320cbfdd5017cf9874f771c683ae5d5acffcb803cd49b53e" Feb 24 00:27:56 crc kubenswrapper[4824]: I0224 00:27:56.154341 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Feb 24 00:27:56 crc kubenswrapper[4824]: I0224 00:27:56.324040 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "e64cc79f-399d-4a53-b509-a6618d565cbf" (UID: "e64cc79f-399d-4a53-b509-a6618d565cbf"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:27:56 crc kubenswrapper[4824]: I0224 00:27:56.420368 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e64cc79f-399d-4a53-b509-a6618d565cbf-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.449337 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 24 00:27:59 crc kubenswrapper[4824]: E0224 00:27:59.450629 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e64cc79f-399d-4a53-b509-a6618d565cbf" containerName="git-clone" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.450649 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="e64cc79f-399d-4a53-b509-a6618d565cbf" containerName="git-clone" Feb 24 00:27:59 crc kubenswrapper[4824]: E0224 00:27:59.450664 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e64cc79f-399d-4a53-b509-a6618d565cbf" containerName="docker-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.450671 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="e64cc79f-399d-4a53-b509-a6618d565cbf" containerName="docker-build" Feb 24 00:27:59 crc kubenswrapper[4824]: E0224 00:27:59.450694 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e64cc79f-399d-4a53-b509-a6618d565cbf" containerName="manage-dockerfile" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.450704 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="e64cc79f-399d-4a53-b509-a6618d565cbf" containerName="manage-dockerfile" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.450850 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="e64cc79f-399d-4a53-b509-a6618d565cbf" containerName="docker-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.451824 4824 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.456057 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-gqxxs" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.457809 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-sys-config" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.458809 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-global-ca" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.458957 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-ca" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.468067 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.579010 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc2pf\" (UniqueName: \"kubernetes.io/projected/b4ba4a5c-0b11-4224-93c1-916afc845dad-kube-api-access-nc2pf\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.579708 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.579920 4824 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.580020 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/b4ba4a5c-0b11-4224-93c1-916afc845dad-builder-dockercfg-gqxxs-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.580111 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b4ba4a5c-0b11-4224-93c1-916afc845dad-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.580229 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.580372 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: 
\"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.580418 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.580468 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b4ba4a5c-0b11-4224-93c1-916afc845dad-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.580621 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/b4ba4a5c-0b11-4224-93c1-916afc845dad-builder-dockercfg-gqxxs-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.580731 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.580778 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.682379 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc2pf\" (UniqueName: \"kubernetes.io/projected/b4ba4a5c-0b11-4224-93c1-916afc845dad-kube-api-access-nc2pf\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.682462 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.682486 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.682507 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/b4ba4a5c-0b11-4224-93c1-916afc845dad-builder-dockercfg-gqxxs-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: 
I0224 00:27:59.682546 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b4ba4a5c-0b11-4224-93c1-916afc845dad-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.682573 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.682611 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.682632 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.682661 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b4ba4a5c-0b11-4224-93c1-916afc845dad-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" 
Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.682682 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/b4ba4a5c-0b11-4224-93c1-916afc845dad-builder-dockercfg-gqxxs-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.682708 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.682733 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.683278 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.683532 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b4ba4a5c-0b11-4224-93c1-916afc845dad-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " 
pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.683715 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b4ba4a5c-0b11-4224-93c1-916afc845dad-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.684194 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.684186 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.684605 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.684934 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: 
\"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.685046 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.685241 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.694544 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/b4ba4a5c-0b11-4224-93c1-916afc845dad-builder-dockercfg-gqxxs-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.694695 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/b4ba4a5c-0b11-4224-93c1-916afc845dad-builder-dockercfg-gqxxs-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.705299 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc2pf\" (UniqueName: 
\"kubernetes.io/projected/b4ba4a5c-0b11-4224-93c1-916afc845dad-kube-api-access-nc2pf\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:27:59 crc kubenswrapper[4824]: I0224 00:27:59.770140 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:28:00 crc kubenswrapper[4824]: I0224 00:28:00.070363 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 24 00:28:00 crc kubenswrapper[4824]: I0224 00:28:00.187041 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"b4ba4a5c-0b11-4224-93c1-916afc845dad","Type":"ContainerStarted","Data":"a0b3e777855e84eda295c0db150df985df6c58f2774ea136065601b5d232eb13"} Feb 24 00:28:01 crc kubenswrapper[4824]: I0224 00:28:01.195875 4824 generic.go:334] "Generic (PLEG): container finished" podID="b4ba4a5c-0b11-4224-93c1-916afc845dad" containerID="3168d2fe985406ad1ee57f8645b40dfea48be1b24606a88cad673ecfbe808feb" exitCode=0 Feb 24 00:28:01 crc kubenswrapper[4824]: I0224 00:28:01.195992 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"b4ba4a5c-0b11-4224-93c1-916afc845dad","Type":"ContainerDied","Data":"3168d2fe985406ad1ee57f8645b40dfea48be1b24606a88cad673ecfbe808feb"} Feb 24 00:28:02 crc kubenswrapper[4824]: I0224 00:28:02.205637 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"b4ba4a5c-0b11-4224-93c1-916afc845dad","Type":"ContainerStarted","Data":"a53ed1fad2fb3c6218dfc006e9192510785f4c7b5539ee92f99bf688c20a7f9b"} Feb 24 00:28:10 crc kubenswrapper[4824]: I0224 00:28:10.554964 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=11.554938831 podStartE2EDuration="11.554938831s" podCreationTimestamp="2026-02-24 00:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:28:02.239202004 +0000 UTC m=+1346.228826493" watchObservedRunningTime="2026-02-24 00:28:10.554938831 +0000 UTC m=+1354.544563310" Feb 24 00:28:10 crc kubenswrapper[4824]: I0224 00:28:10.559715 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 24 00:28:10 crc kubenswrapper[4824]: I0224 00:28:10.560034 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="b4ba4a5c-0b11-4224-93c1-916afc845dad" containerName="docker-build" containerID="cri-o://a53ed1fad2fb3c6218dfc006e9192510785f4c7b5539ee92f99bf688c20a7f9b" gracePeriod=30 Feb 24 00:28:10 crc kubenswrapper[4824]: I0224 00:28:10.979760 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_b4ba4a5c-0b11-4224-93c1-916afc845dad/docker-build/0.log" Feb 24 00:28:10 crc kubenswrapper[4824]: I0224 00:28:10.980472 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.047414 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-system-configs\") pod \"b4ba4a5c-0b11-4224-93c1-916afc845dad\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.048694 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b4ba4a5c-0b11-4224-93c1-916afc845dad" (UID: "b4ba4a5c-0b11-4224-93c1-916afc845dad"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.149190 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b4ba4a5c-0b11-4224-93c1-916afc845dad-node-pullsecrets\") pod \"b4ba4a5c-0b11-4224-93c1-916afc845dad\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.149277 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/b4ba4a5c-0b11-4224-93c1-916afc845dad-builder-dockercfg-gqxxs-pull\") pod \"b4ba4a5c-0b11-4224-93c1-916afc845dad\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.149323 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-proxy-ca-bundles\") pod \"b4ba4a5c-0b11-4224-93c1-916afc845dad\" (UID: 
\"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.149381 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc2pf\" (UniqueName: \"kubernetes.io/projected/b4ba4a5c-0b11-4224-93c1-916afc845dad-kube-api-access-nc2pf\") pod \"b4ba4a5c-0b11-4224-93c1-916afc845dad\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.149363 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4ba4a5c-0b11-4224-93c1-916afc845dad-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b4ba4a5c-0b11-4224-93c1-916afc845dad" (UID: "b4ba4a5c-0b11-4224-93c1-916afc845dad"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.149423 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-blob-cache\") pod \"b4ba4a5c-0b11-4224-93c1-916afc845dad\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.149455 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b4ba4a5c-0b11-4224-93c1-916afc845dad-buildcachedir\") pod \"b4ba4a5c-0b11-4224-93c1-916afc845dad\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.149494 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-container-storage-run\") pod \"b4ba4a5c-0b11-4224-93c1-916afc845dad\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 
00:28:11.149571 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-container-storage-root\") pod \"b4ba4a5c-0b11-4224-93c1-916afc845dad\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.149596 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4ba4a5c-0b11-4224-93c1-916afc845dad-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "b4ba4a5c-0b11-4224-93c1-916afc845dad" (UID: "b4ba4a5c-0b11-4224-93c1-916afc845dad"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.149630 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-ca-bundles\") pod \"b4ba4a5c-0b11-4224-93c1-916afc845dad\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.149685 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/b4ba4a5c-0b11-4224-93c1-916afc845dad-builder-dockercfg-gqxxs-push\") pod \"b4ba4a5c-0b11-4224-93c1-916afc845dad\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.149753 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-buildworkdir\") pod \"b4ba4a5c-0b11-4224-93c1-916afc845dad\" (UID: \"b4ba4a5c-0b11-4224-93c1-916afc845dad\") " Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.150170 4824 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" 
(UniqueName: \"kubernetes.io/host-path/b4ba4a5c-0b11-4224-93c1-916afc845dad-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.150193 4824 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.150211 4824 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b4ba4a5c-0b11-4224-93c1-916afc845dad-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.155882 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b4ba4a5c-0b11-4224-93c1-916afc845dad" (UID: "b4ba4a5c-0b11-4224-93c1-916afc845dad"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.156731 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b4ba4a5c-0b11-4224-93c1-916afc845dad" (UID: "b4ba4a5c-0b11-4224-93c1-916afc845dad"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.158246 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b4ba4a5c-0b11-4224-93c1-916afc845dad" (UID: "b4ba4a5c-0b11-4224-93c1-916afc845dad"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.158350 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b4ba4a5c-0b11-4224-93c1-916afc845dad" (UID: "b4ba4a5c-0b11-4224-93c1-916afc845dad"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.161434 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4ba4a5c-0b11-4224-93c1-916afc845dad-builder-dockercfg-gqxxs-pull" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-pull") pod "b4ba4a5c-0b11-4224-93c1-916afc845dad" (UID: "b4ba4a5c-0b11-4224-93c1-916afc845dad"). InnerVolumeSpecName "builder-dockercfg-gqxxs-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.162319 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4ba4a5c-0b11-4224-93c1-916afc845dad-kube-api-access-nc2pf" (OuterVolumeSpecName: "kube-api-access-nc2pf") pod "b4ba4a5c-0b11-4224-93c1-916afc845dad" (UID: "b4ba4a5c-0b11-4224-93c1-916afc845dad"). InnerVolumeSpecName "kube-api-access-nc2pf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.162322 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4ba4a5c-0b11-4224-93c1-916afc845dad-builder-dockercfg-gqxxs-push" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-push") pod "b4ba4a5c-0b11-4224-93c1-916afc845dad" (UID: "b4ba4a5c-0b11-4224-93c1-916afc845dad"). InnerVolumeSpecName "builder-dockercfg-gqxxs-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.250693 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/b4ba4a5c-0b11-4224-93c1-916afc845dad-builder-dockercfg-gqxxs-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.250735 4824 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.250747 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc2pf\" (UniqueName: \"kubernetes.io/projected/b4ba4a5c-0b11-4224-93c1-916afc845dad-kube-api-access-nc2pf\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.250758 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.250771 4824 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.250783 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/b4ba4a5c-0b11-4224-93c1-916afc845dad-builder-dockercfg-gqxxs-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.250794 4824 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-buildworkdir\") on node 
\"crc\" DevicePath \"\"" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.254307 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b4ba4a5c-0b11-4224-93c1-916afc845dad" (UID: "b4ba4a5c-0b11-4224-93c1-916afc845dad"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.285018 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_b4ba4a5c-0b11-4224-93c1-916afc845dad/docker-build/0.log" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.286614 4824 generic.go:334] "Generic (PLEG): container finished" podID="b4ba4a5c-0b11-4224-93c1-916afc845dad" containerID="a53ed1fad2fb3c6218dfc006e9192510785f4c7b5539ee92f99bf688c20a7f9b" exitCode=1 Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.286656 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"b4ba4a5c-0b11-4224-93c1-916afc845dad","Type":"ContainerDied","Data":"a53ed1fad2fb3c6218dfc006e9192510785f4c7b5539ee92f99bf688c20a7f9b"} Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.286692 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"b4ba4a5c-0b11-4224-93c1-916afc845dad","Type":"ContainerDied","Data":"a0b3e777855e84eda295c0db150df985df6c58f2774ea136065601b5d232eb13"} Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.286719 4824 scope.go:117] "RemoveContainer" containerID="a53ed1fad2fb3c6218dfc006e9192510785f4c7b5539ee92f99bf688c20a7f9b" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.286879 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.312551 4824 scope.go:117] "RemoveContainer" containerID="3168d2fe985406ad1ee57f8645b40dfea48be1b24606a88cad673ecfbe808feb" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.340063 4824 scope.go:117] "RemoveContainer" containerID="a53ed1fad2fb3c6218dfc006e9192510785f4c7b5539ee92f99bf688c20a7f9b" Feb 24 00:28:11 crc kubenswrapper[4824]: E0224 00:28:11.340547 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a53ed1fad2fb3c6218dfc006e9192510785f4c7b5539ee92f99bf688c20a7f9b\": container with ID starting with a53ed1fad2fb3c6218dfc006e9192510785f4c7b5539ee92f99bf688c20a7f9b not found: ID does not exist" containerID="a53ed1fad2fb3c6218dfc006e9192510785f4c7b5539ee92f99bf688c20a7f9b" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.340607 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a53ed1fad2fb3c6218dfc006e9192510785f4c7b5539ee92f99bf688c20a7f9b"} err="failed to get container status \"a53ed1fad2fb3c6218dfc006e9192510785f4c7b5539ee92f99bf688c20a7f9b\": rpc error: code = NotFound desc = could not find container \"a53ed1fad2fb3c6218dfc006e9192510785f4c7b5539ee92f99bf688c20a7f9b\": container with ID starting with a53ed1fad2fb3c6218dfc006e9192510785f4c7b5539ee92f99bf688c20a7f9b not found: ID does not exist" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.340648 4824 scope.go:117] "RemoveContainer" containerID="3168d2fe985406ad1ee57f8645b40dfea48be1b24606a88cad673ecfbe808feb" Feb 24 00:28:11 crc kubenswrapper[4824]: E0224 00:28:11.341020 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3168d2fe985406ad1ee57f8645b40dfea48be1b24606a88cad673ecfbe808feb\": container with ID starting with 
3168d2fe985406ad1ee57f8645b40dfea48be1b24606a88cad673ecfbe808feb not found: ID does not exist" containerID="3168d2fe985406ad1ee57f8645b40dfea48be1b24606a88cad673ecfbe808feb" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.341061 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3168d2fe985406ad1ee57f8645b40dfea48be1b24606a88cad673ecfbe808feb"} err="failed to get container status \"3168d2fe985406ad1ee57f8645b40dfea48be1b24606a88cad673ecfbe808feb\": rpc error: code = NotFound desc = could not find container \"3168d2fe985406ad1ee57f8645b40dfea48be1b24606a88cad673ecfbe808feb\": container with ID starting with 3168d2fe985406ad1ee57f8645b40dfea48be1b24606a88cad673ecfbe808feb not found: ID does not exist" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.351876 4824 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.540211 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b4ba4a5c-0b11-4224-93c1-916afc845dad" (UID: "b4ba4a5c-0b11-4224-93c1-916afc845dad"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.555106 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b4ba4a5c-0b11-4224-93c1-916afc845dad-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.624956 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 24 00:28:11 crc kubenswrapper[4824]: I0224 00:28:11.632058 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.369220 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Feb 24 00:28:12 crc kubenswrapper[4824]: E0224 00:28:12.369575 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4ba4a5c-0b11-4224-93c1-916afc845dad" containerName="manage-dockerfile" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.369595 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4ba4a5c-0b11-4224-93c1-916afc845dad" containerName="manage-dockerfile" Feb 24 00:28:12 crc kubenswrapper[4824]: E0224 00:28:12.369627 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4ba4a5c-0b11-4224-93c1-916afc845dad" containerName="docker-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.369639 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4ba4a5c-0b11-4224-93c1-916afc845dad" containerName="docker-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.369817 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4ba4a5c-0b11-4224-93c1-916afc845dad" containerName="docker-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.371234 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.374098 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-global-ca" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.374332 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-gqxxs" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.374379 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-ca" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.375238 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-sys-config" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.393723 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.469257 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.469394 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.469437 4824 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2502d667-d99a-44c3-9d90-297fa992415f-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.469506 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/2502d667-d99a-44c3-9d90-297fa992415f-builder-dockercfg-gqxxs-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.469582 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2502d667-d99a-44c3-9d90-297fa992415f-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.469626 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.469675 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2502d667-d99a-44c3-9d90-297fa992415f-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " 
pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.469729 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2502d667-d99a-44c3-9d90-297fa992415f-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.469778 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/2502d667-d99a-44c3-9d90-297fa992415f-builder-dockercfg-gqxxs-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.469878 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.469927 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2502d667-d99a-44c3-9d90-297fa992415f-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.469993 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42hgn\" 
(UniqueName: \"kubernetes.io/projected/2502d667-d99a-44c3-9d90-297fa992415f-kube-api-access-42hgn\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.570641 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/2502d667-d99a-44c3-9d90-297fa992415f-builder-dockercfg-gqxxs-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.570729 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2502d667-d99a-44c3-9d90-297fa992415f-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.570762 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.570789 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2502d667-d99a-44c3-9d90-297fa992415f-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.570844 4824 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2502d667-d99a-44c3-9d90-297fa992415f-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.570878 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/2502d667-d99a-44c3-9d90-297fa992415f-builder-dockercfg-gqxxs-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.570922 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.571029 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2502d667-d99a-44c3-9d90-297fa992415f-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.571515 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42hgn\" (UniqueName: \"kubernetes.io/projected/2502d667-d99a-44c3-9d90-297fa992415f-kube-api-access-42hgn\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 
00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.571584 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.571609 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.571631 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2502d667-d99a-44c3-9d90-297fa992415f-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.571690 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.571712 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2502d667-d99a-44c3-9d90-297fa992415f-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " 
pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.571497 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2502d667-d99a-44c3-9d90-297fa992415f-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.571889 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.572061 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2502d667-d99a-44c3-9d90-297fa992415f-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.572101 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.572136 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2502d667-d99a-44c3-9d90-297fa992415f-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: 
\"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.572151 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2502d667-d99a-44c3-9d90-297fa992415f-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.572237 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.575919 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/2502d667-d99a-44c3-9d90-297fa992415f-builder-dockercfg-gqxxs-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.575929 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/2502d667-d99a-44c3-9d90-297fa992415f-builder-dockercfg-gqxxs-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.598030 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42hgn\" (UniqueName: 
\"kubernetes.io/projected/2502d667-d99a-44c3-9d90-297fa992415f-kube-api-access-42hgn\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.693342 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.702174 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4ba4a5c-0b11-4224-93c1-916afc845dad" path="/var/lib/kubelet/pods/b4ba4a5c-0b11-4224-93c1-916afc845dad/volumes" Feb 24 00:28:12 crc kubenswrapper[4824]: I0224 00:28:12.907399 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Feb 24 00:28:13 crc kubenswrapper[4824]: I0224 00:28:13.305110 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"2502d667-d99a-44c3-9d90-297fa992415f","Type":"ContainerStarted","Data":"83459e53dc9e550b588f73c15cde8e7449e5987cd6bdc17ae8e1b6e921e61f0c"} Feb 24 00:28:13 crc kubenswrapper[4824]: I0224 00:28:13.305463 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"2502d667-d99a-44c3-9d90-297fa992415f","Type":"ContainerStarted","Data":"7f43c818a6b6b7415752e417bc61876b49ff2671f72718851273b1acc4e80670"} Feb 24 00:28:14 crc kubenswrapper[4824]: I0224 00:28:14.318631 4824 generic.go:334] "Generic (PLEG): container finished" podID="2502d667-d99a-44c3-9d90-297fa992415f" containerID="83459e53dc9e550b588f73c15cde8e7449e5987cd6bdc17ae8e1b6e921e61f0c" exitCode=0 Feb 24 00:28:14 crc kubenswrapper[4824]: I0224 00:28:14.318710 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" 
event={"ID":"2502d667-d99a-44c3-9d90-297fa992415f","Type":"ContainerDied","Data":"83459e53dc9e550b588f73c15cde8e7449e5987cd6bdc17ae8e1b6e921e61f0c"} Feb 24 00:28:15 crc kubenswrapper[4824]: I0224 00:28:15.327197 4824 generic.go:334] "Generic (PLEG): container finished" podID="2502d667-d99a-44c3-9d90-297fa992415f" containerID="cd7d2bcb10a00e2cffb8c401cf1245e43d358c4602903a675a36eb052d31ade4" exitCode=0 Feb 24 00:28:15 crc kubenswrapper[4824]: I0224 00:28:15.327251 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"2502d667-d99a-44c3-9d90-297fa992415f","Type":"ContainerDied","Data":"cd7d2bcb10a00e2cffb8c401cf1245e43d358c4602903a675a36eb052d31ade4"} Feb 24 00:28:15 crc kubenswrapper[4824]: I0224 00:28:15.363056 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_2502d667-d99a-44c3-9d90-297fa992415f/manage-dockerfile/0.log" Feb 24 00:28:16 crc kubenswrapper[4824]: I0224 00:28:16.337530 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"2502d667-d99a-44c3-9d90-297fa992415f","Type":"ContainerStarted","Data":"62efab338768a2b789db26b97a99b02cd2e06c98366a896fcb14f9e0c6b5a304"} Feb 24 00:28:23 crc kubenswrapper[4824]: I0224 00:28:23.275940 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:28:23 crc kubenswrapper[4824]: I0224 00:28:23.276820 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 24 00:28:53 crc kubenswrapper[4824]: I0224 00:28:53.276118 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:28:53 crc kubenswrapper[4824]: I0224 00:28:53.277190 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:28:53 crc kubenswrapper[4824]: I0224 00:28:53.277256 4824 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" Feb 24 00:28:53 crc kubenswrapper[4824]: I0224 00:28:53.278231 4824 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8d819df51f5c54106cb947aecba467be6a7835d606611afb3e6526ac4d026f80"} pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 00:28:53 crc kubenswrapper[4824]: I0224 00:28:53.278306 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" containerID="cri-o://8d819df51f5c54106cb947aecba467be6a7835d606611afb3e6526ac4d026f80" gracePeriod=600 Feb 24 00:28:53 crc kubenswrapper[4824]: I0224 00:28:53.636974 4824 generic.go:334] "Generic (PLEG): container finished" podID="939ca085-9383-42e6-b7d6-37f101137273" 
containerID="8d819df51f5c54106cb947aecba467be6a7835d606611afb3e6526ac4d026f80" exitCode=0 Feb 24 00:28:53 crc kubenswrapper[4824]: I0224 00:28:53.637714 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" event={"ID":"939ca085-9383-42e6-b7d6-37f101137273","Type":"ContainerDied","Data":"8d819df51f5c54106cb947aecba467be6a7835d606611afb3e6526ac4d026f80"} Feb 24 00:28:53 crc kubenswrapper[4824]: I0224 00:28:53.637757 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" event={"ID":"939ca085-9383-42e6-b7d6-37f101137273","Type":"ContainerStarted","Data":"1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364"} Feb 24 00:28:53 crc kubenswrapper[4824]: I0224 00:28:53.637779 4824 scope.go:117] "RemoveContainer" containerID="32f31702ad77be87a49a0e3d023914422f1fbe192b728a29ed31dacaa99cc4eb" Feb 24 00:28:53 crc kubenswrapper[4824]: I0224 00:28:53.669040 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=41.669018191 podStartE2EDuration="41.669018191s" podCreationTimestamp="2026-02-24 00:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:28:16.363793021 +0000 UTC m=+1360.353417510" watchObservedRunningTime="2026-02-24 00:28:53.669018191 +0000 UTC m=+1397.658642650" Feb 24 00:29:19 crc kubenswrapper[4824]: I0224 00:29:19.834041 4824 generic.go:334] "Generic (PLEG): container finished" podID="2502d667-d99a-44c3-9d90-297fa992415f" containerID="62efab338768a2b789db26b97a99b02cd2e06c98366a896fcb14f9e0c6b5a304" exitCode=0 Feb 24 00:29:19 crc kubenswrapper[4824]: I0224 00:29:19.834274 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" 
event={"ID":"2502d667-d99a-44c3-9d90-297fa992415f","Type":"ContainerDied","Data":"62efab338768a2b789db26b97a99b02cd2e06c98366a896fcb14f9e0c6b5a304"} Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.153683 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.195469 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2502d667-d99a-44c3-9d90-297fa992415f-buildcachedir\") pod \"2502d667-d99a-44c3-9d90-297fa992415f\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.195613 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-build-blob-cache\") pod \"2502d667-d99a-44c3-9d90-297fa992415f\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.195675 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42hgn\" (UniqueName: \"kubernetes.io/projected/2502d667-d99a-44c3-9d90-297fa992415f-kube-api-access-42hgn\") pod \"2502d667-d99a-44c3-9d90-297fa992415f\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.195724 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2502d667-d99a-44c3-9d90-297fa992415f-build-ca-bundles\") pod \"2502d667-d99a-44c3-9d90-297fa992415f\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.195750 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-container-storage-root\") pod \"2502d667-d99a-44c3-9d90-297fa992415f\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.195774 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/2502d667-d99a-44c3-9d90-297fa992415f-builder-dockercfg-gqxxs-push\") pod \"2502d667-d99a-44c3-9d90-297fa992415f\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.195811 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2502d667-d99a-44c3-9d90-297fa992415f-build-proxy-ca-bundles\") pod \"2502d667-d99a-44c3-9d90-297fa992415f\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.195855 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-container-storage-run\") pod \"2502d667-d99a-44c3-9d90-297fa992415f\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.195871 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2502d667-d99a-44c3-9d90-297fa992415f-build-system-configs\") pod \"2502d667-d99a-44c3-9d90-297fa992415f\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.195892 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-buildworkdir\") pod \"2502d667-d99a-44c3-9d90-297fa992415f\" (UID: 
\"2502d667-d99a-44c3-9d90-297fa992415f\") " Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.195920 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2502d667-d99a-44c3-9d90-297fa992415f-node-pullsecrets\") pod \"2502d667-d99a-44c3-9d90-297fa992415f\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.196004 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/2502d667-d99a-44c3-9d90-297fa992415f-builder-dockercfg-gqxxs-pull\") pod \"2502d667-d99a-44c3-9d90-297fa992415f\" (UID: \"2502d667-d99a-44c3-9d90-297fa992415f\") " Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.197335 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2502d667-d99a-44c3-9d90-297fa992415f-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "2502d667-d99a-44c3-9d90-297fa992415f" (UID: "2502d667-d99a-44c3-9d90-297fa992415f"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.197385 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2502d667-d99a-44c3-9d90-297fa992415f-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "2502d667-d99a-44c3-9d90-297fa992415f" (UID: "2502d667-d99a-44c3-9d90-297fa992415f"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.198255 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2502d667-d99a-44c3-9d90-297fa992415f-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "2502d667-d99a-44c3-9d90-297fa992415f" (UID: "2502d667-d99a-44c3-9d90-297fa992415f"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.198346 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2502d667-d99a-44c3-9d90-297fa992415f-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "2502d667-d99a-44c3-9d90-297fa992415f" (UID: "2502d667-d99a-44c3-9d90-297fa992415f"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.199231 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "2502d667-d99a-44c3-9d90-297fa992415f" (UID: "2502d667-d99a-44c3-9d90-297fa992415f"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.199394 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2502d667-d99a-44c3-9d90-297fa992415f-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "2502d667-d99a-44c3-9d90-297fa992415f" (UID: "2502d667-d99a-44c3-9d90-297fa992415f"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.202186 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "2502d667-d99a-44c3-9d90-297fa992415f" (UID: "2502d667-d99a-44c3-9d90-297fa992415f"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.205767 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2502d667-d99a-44c3-9d90-297fa992415f-builder-dockercfg-gqxxs-pull" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-pull") pod "2502d667-d99a-44c3-9d90-297fa992415f" (UID: "2502d667-d99a-44c3-9d90-297fa992415f"). InnerVolumeSpecName "builder-dockercfg-gqxxs-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.205786 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2502d667-d99a-44c3-9d90-297fa992415f-builder-dockercfg-gqxxs-push" (OuterVolumeSpecName: "builder-dockercfg-gqxxs-push") pod "2502d667-d99a-44c3-9d90-297fa992415f" (UID: "2502d667-d99a-44c3-9d90-297fa992415f"). InnerVolumeSpecName "builder-dockercfg-gqxxs-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.206135 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2502d667-d99a-44c3-9d90-297fa992415f-kube-api-access-42hgn" (OuterVolumeSpecName: "kube-api-access-42hgn") pod "2502d667-d99a-44c3-9d90-297fa992415f" (UID: "2502d667-d99a-44c3-9d90-297fa992415f"). InnerVolumeSpecName "kube-api-access-42hgn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.306161 4824 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/2502d667-d99a-44c3-9d90-297fa992415f-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.306223 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42hgn\" (UniqueName: \"kubernetes.io/projected/2502d667-d99a-44c3-9d90-297fa992415f-kube-api-access-42hgn\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.306237 4824 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2502d667-d99a-44c3-9d90-297fa992415f-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.306265 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-push\" (UniqueName: \"kubernetes.io/secret/2502d667-d99a-44c3-9d90-297fa992415f-builder-dockercfg-gqxxs-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.306279 4824 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2502d667-d99a-44c3-9d90-297fa992415f-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.306291 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.306305 4824 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/2502d667-d99a-44c3-9d90-297fa992415f-build-system-configs\") on node \"crc\" 
DevicePath \"\"" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.306319 4824 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.306349 4824 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2502d667-d99a-44c3-9d90-297fa992415f-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.306362 4824 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-gqxxs-pull\" (UniqueName: \"kubernetes.io/secret/2502d667-d99a-44c3-9d90-297fa992415f-builder-dockercfg-gqxxs-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.332904 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "2502d667-d99a-44c3-9d90-297fa992415f" (UID: "2502d667-d99a-44c3-9d90-297fa992415f"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.407835 4824 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.853395 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"2502d667-d99a-44c3-9d90-297fa992415f","Type":"ContainerDied","Data":"7f43c818a6b6b7415752e417bc61876b49ff2671f72718851273b1acc4e80670"} Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.854410 4824 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f43c818a6b6b7415752e417bc61876b49ff2671f72718851273b1acc4e80670" Feb 24 00:29:21 crc kubenswrapper[4824]: I0224 00:29:21.853482 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 24 00:29:22 crc kubenswrapper[4824]: I0224 00:29:22.097764 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "2502d667-d99a-44c3-9d90-297fa992415f" (UID: "2502d667-d99a-44c3-9d90-297fa992415f"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:29:22 crc kubenswrapper[4824]: I0224 00:29:22.119667 4824 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/2502d667-d99a-44c3-9d90-297fa992415f-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:29:27 crc kubenswrapper[4824]: I0224 00:29:27.894671 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-755b8777c-j59cx"] Feb 24 00:29:27 crc kubenswrapper[4824]: E0224 00:29:27.895202 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2502d667-d99a-44c3-9d90-297fa992415f" containerName="git-clone" Feb 24 00:29:27 crc kubenswrapper[4824]: I0224 00:29:27.895215 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="2502d667-d99a-44c3-9d90-297fa992415f" containerName="git-clone" Feb 24 00:29:27 crc kubenswrapper[4824]: E0224 00:29:27.895227 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2502d667-d99a-44c3-9d90-297fa992415f" containerName="manage-dockerfile" Feb 24 00:29:27 crc kubenswrapper[4824]: I0224 00:29:27.895233 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="2502d667-d99a-44c3-9d90-297fa992415f" containerName="manage-dockerfile" Feb 24 00:29:27 crc kubenswrapper[4824]: E0224 00:29:27.895246 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2502d667-d99a-44c3-9d90-297fa992415f" containerName="docker-build" Feb 24 00:29:27 crc kubenswrapper[4824]: I0224 00:29:27.895252 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="2502d667-d99a-44c3-9d90-297fa992415f" containerName="docker-build" Feb 24 00:29:27 crc kubenswrapper[4824]: I0224 00:29:27.895378 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="2502d667-d99a-44c3-9d90-297fa992415f" containerName="docker-build" Feb 24 00:29:27 crc kubenswrapper[4824]: I0224 00:29:27.895904 4824 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-755b8777c-j59cx" Feb 24 00:29:27 crc kubenswrapper[4824]: I0224 00:29:27.898412 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-46ncq" Feb 24 00:29:27 crc kubenswrapper[4824]: I0224 00:29:27.954406 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-755b8777c-j59cx"] Feb 24 00:29:28 crc kubenswrapper[4824]: I0224 00:29:28.006548 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58b9k\" (UniqueName: \"kubernetes.io/projected/99d102db-b6a5-428f-acec-1311a225325d-kube-api-access-58b9k\") pod \"smart-gateway-operator-755b8777c-j59cx\" (UID: \"99d102db-b6a5-428f-acec-1311a225325d\") " pod="service-telemetry/smart-gateway-operator-755b8777c-j59cx" Feb 24 00:29:28 crc kubenswrapper[4824]: I0224 00:29:28.006660 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/99d102db-b6a5-428f-acec-1311a225325d-runner\") pod \"smart-gateway-operator-755b8777c-j59cx\" (UID: \"99d102db-b6a5-428f-acec-1311a225325d\") " pod="service-telemetry/smart-gateway-operator-755b8777c-j59cx" Feb 24 00:29:28 crc kubenswrapper[4824]: I0224 00:29:28.107980 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58b9k\" (UniqueName: \"kubernetes.io/projected/99d102db-b6a5-428f-acec-1311a225325d-kube-api-access-58b9k\") pod \"smart-gateway-operator-755b8777c-j59cx\" (UID: \"99d102db-b6a5-428f-acec-1311a225325d\") " pod="service-telemetry/smart-gateway-operator-755b8777c-j59cx" Feb 24 00:29:28 crc kubenswrapper[4824]: I0224 00:29:28.108057 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: 
\"kubernetes.io/empty-dir/99d102db-b6a5-428f-acec-1311a225325d-runner\") pod \"smart-gateway-operator-755b8777c-j59cx\" (UID: \"99d102db-b6a5-428f-acec-1311a225325d\") " pod="service-telemetry/smart-gateway-operator-755b8777c-j59cx" Feb 24 00:29:28 crc kubenswrapper[4824]: I0224 00:29:28.108510 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/99d102db-b6a5-428f-acec-1311a225325d-runner\") pod \"smart-gateway-operator-755b8777c-j59cx\" (UID: \"99d102db-b6a5-428f-acec-1311a225325d\") " pod="service-telemetry/smart-gateway-operator-755b8777c-j59cx" Feb 24 00:29:28 crc kubenswrapper[4824]: I0224 00:29:28.134885 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58b9k\" (UniqueName: \"kubernetes.io/projected/99d102db-b6a5-428f-acec-1311a225325d-kube-api-access-58b9k\") pod \"smart-gateway-operator-755b8777c-j59cx\" (UID: \"99d102db-b6a5-428f-acec-1311a225325d\") " pod="service-telemetry/smart-gateway-operator-755b8777c-j59cx" Feb 24 00:29:28 crc kubenswrapper[4824]: I0224 00:29:28.227736 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-755b8777c-j59cx" Feb 24 00:29:28 crc kubenswrapper[4824]: I0224 00:29:28.437987 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-755b8777c-j59cx"] Feb 24 00:29:28 crc kubenswrapper[4824]: I0224 00:29:28.440117 4824 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 00:29:28 crc kubenswrapper[4824]: I0224 00:29:28.903220 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-755b8777c-j59cx" event={"ID":"99d102db-b6a5-428f-acec-1311a225325d","Type":"ContainerStarted","Data":"76b31fe4fe3a776a331527f9c94ab0067e4ef9a961f001cd8d79b9f195a4e6d3"} Feb 24 00:29:33 crc kubenswrapper[4824]: I0224 00:29:33.475770 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-7f7c584b79-2rbxz"] Feb 24 00:29:33 crc kubenswrapper[4824]: I0224 00:29:33.484312 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-7f7c584b79-2rbxz" Feb 24 00:29:33 crc kubenswrapper[4824]: I0224 00:29:33.487195 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-xkzqf" Feb 24 00:29:33 crc kubenswrapper[4824]: I0224 00:29:33.526496 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-7f7c584b79-2rbxz"] Feb 24 00:29:33 crc kubenswrapper[4824]: I0224 00:29:33.607645 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vspqc\" (UniqueName: \"kubernetes.io/projected/3394aaea-7658-498b-aab1-7494fb832c8f-kube-api-access-vspqc\") pod \"service-telemetry-operator-7f7c584b79-2rbxz\" (UID: \"3394aaea-7658-498b-aab1-7494fb832c8f\") " pod="service-telemetry/service-telemetry-operator-7f7c584b79-2rbxz" Feb 24 00:29:33 crc kubenswrapper[4824]: I0224 00:29:33.607771 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/3394aaea-7658-498b-aab1-7494fb832c8f-runner\") pod \"service-telemetry-operator-7f7c584b79-2rbxz\" (UID: \"3394aaea-7658-498b-aab1-7494fb832c8f\") " pod="service-telemetry/service-telemetry-operator-7f7c584b79-2rbxz" Feb 24 00:29:33 crc kubenswrapper[4824]: I0224 00:29:33.709546 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vspqc\" (UniqueName: \"kubernetes.io/projected/3394aaea-7658-498b-aab1-7494fb832c8f-kube-api-access-vspqc\") pod \"service-telemetry-operator-7f7c584b79-2rbxz\" (UID: \"3394aaea-7658-498b-aab1-7494fb832c8f\") " pod="service-telemetry/service-telemetry-operator-7f7c584b79-2rbxz" Feb 24 00:29:33 crc kubenswrapper[4824]: I0224 00:29:33.709609 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: 
\"kubernetes.io/empty-dir/3394aaea-7658-498b-aab1-7494fb832c8f-runner\") pod \"service-telemetry-operator-7f7c584b79-2rbxz\" (UID: \"3394aaea-7658-498b-aab1-7494fb832c8f\") " pod="service-telemetry/service-telemetry-operator-7f7c584b79-2rbxz" Feb 24 00:29:33 crc kubenswrapper[4824]: I0224 00:29:33.710017 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/3394aaea-7658-498b-aab1-7494fb832c8f-runner\") pod \"service-telemetry-operator-7f7c584b79-2rbxz\" (UID: \"3394aaea-7658-498b-aab1-7494fb832c8f\") " pod="service-telemetry/service-telemetry-operator-7f7c584b79-2rbxz" Feb 24 00:29:33 crc kubenswrapper[4824]: I0224 00:29:33.733858 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vspqc\" (UniqueName: \"kubernetes.io/projected/3394aaea-7658-498b-aab1-7494fb832c8f-kube-api-access-vspqc\") pod \"service-telemetry-operator-7f7c584b79-2rbxz\" (UID: \"3394aaea-7658-498b-aab1-7494fb832c8f\") " pod="service-telemetry/service-telemetry-operator-7f7c584b79-2rbxz" Feb 24 00:29:33 crc kubenswrapper[4824]: I0224 00:29:33.821709 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-7f7c584b79-2rbxz" Feb 24 00:29:40 crc kubenswrapper[4824]: I0224 00:29:40.195155 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-7f7c584b79-2rbxz"] Feb 24 00:29:42 crc kubenswrapper[4824]: W0224 00:29:42.904042 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3394aaea_7658_498b_aab1_7494fb832c8f.slice/crio-4f29de758e19b070405b2537ae2ab76fb433f996ec903bb1877f9209bf0a929b WatchSource:0}: Error finding container 4f29de758e19b070405b2537ae2ab76fb433f996ec903bb1877f9209bf0a929b: Status 404 returned error can't find the container with id 4f29de758e19b070405b2537ae2ab76fb433f996ec903bb1877f9209bf0a929b Feb 24 00:29:43 crc kubenswrapper[4824]: I0224 00:29:43.040681 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-7f7c584b79-2rbxz" event={"ID":"3394aaea-7658-498b-aab1-7494fb832c8f","Type":"ContainerStarted","Data":"4f29de758e19b070405b2537ae2ab76fb433f996ec903bb1877f9209bf0a929b"} Feb 24 00:29:46 crc kubenswrapper[4824]: E0224 00:29:46.313406 4824 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/smart-gateway-operator:latest" Feb 24 00:29:46 crc kubenswrapper[4824]: E0224 00:29:46.314339 4824 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/infrawatch/smart-gateway-operator:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:smart-gateway-operator,ValueFrom:nil,},EnvVar{Name:ANSIBLE_GATHERING,Value:explicit,ValueFrom:nil,},EnvVar{Name:ANSIBLE_VERBOSITY_SMARTGATEWAY_SMARTGATEWAY_INFRA_WATCH,Value:4,ValueFrom:nil,},EnvVar{Name:ANSIBLE_DEBUG_LOGS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CORE_SMARTGATEWAY_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BRIDGE_SMARTGATEWAY_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/sg-bridge:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OAUTH_PROXY_IMAGE,Value:quay.io/openshift/origin-oauth-proxy:latest,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:smart-gateway-operator.v5.0.1771892962,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:runner,ReadOnly:false,MountPath:/tmp/ansible-operator/runner,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-58b9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop
:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod smart-gateway-operator-755b8777c-j59cx_service-telemetry(99d102db-b6a5-428f-acec-1311a225325d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 00:29:46 crc kubenswrapper[4824]: E0224 00:29:46.315726 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/smart-gateway-operator-755b8777c-j59cx" podUID="99d102db-b6a5-428f-acec-1311a225325d" Feb 24 00:29:47 crc kubenswrapper[4824]: E0224 00:29:47.088834 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/smart-gateway-operator:latest\\\"\"" pod="service-telemetry/smart-gateway-operator-755b8777c-j59cx" podUID="99d102db-b6a5-428f-acec-1311a225325d" Feb 24 00:29:52 crc kubenswrapper[4824]: I0224 00:29:52.111454 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-7f7c584b79-2rbxz" event={"ID":"3394aaea-7658-498b-aab1-7494fb832c8f","Type":"ContainerStarted","Data":"7a02a539e07725635599d64121b36d64e50cd14340ba35cb917417835c176e3a"} Feb 24 00:29:52 crc kubenswrapper[4824]: I0224 00:29:52.146418 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-7f7c584b79-2rbxz" podStartSLOduration=10.881463113 
podStartE2EDuration="19.146390696s" podCreationTimestamp="2026-02-24 00:29:33 +0000 UTC" firstStartedPulling="2026-02-24 00:29:42.908967233 +0000 UTC m=+1446.898591702" lastFinishedPulling="2026-02-24 00:29:51.173894816 +0000 UTC m=+1455.163519285" observedRunningTime="2026-02-24 00:29:52.129157348 +0000 UTC m=+1456.118781817" watchObservedRunningTime="2026-02-24 00:29:52.146390696 +0000 UTC m=+1456.136015165" Feb 24 00:30:00 crc kubenswrapper[4824]: I0224 00:30:00.156865 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531550-fz9x9"] Feb 24 00:30:00 crc kubenswrapper[4824]: I0224 00:30:00.158352 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-fz9x9" Feb 24 00:30:00 crc kubenswrapper[4824]: I0224 00:30:00.162325 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 24 00:30:00 crc kubenswrapper[4824]: I0224 00:30:00.164423 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 00:30:00 crc kubenswrapper[4824]: I0224 00:30:00.174472 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531550-fz9x9"] Feb 24 00:30:00 crc kubenswrapper[4824]: I0224 00:30:00.228948 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbwb6\" (UniqueName: \"kubernetes.io/projected/46e264d0-74a1-43c2-8e9e-b66bf6a0ce87-kube-api-access-gbwb6\") pod \"collect-profiles-29531550-fz9x9\" (UID: \"46e264d0-74a1-43c2-8e9e-b66bf6a0ce87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-fz9x9" Feb 24 00:30:00 crc kubenswrapper[4824]: I0224 00:30:00.229025 4824 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46e264d0-74a1-43c2-8e9e-b66bf6a0ce87-config-volume\") pod \"collect-profiles-29531550-fz9x9\" (UID: \"46e264d0-74a1-43c2-8e9e-b66bf6a0ce87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-fz9x9" Feb 24 00:30:00 crc kubenswrapper[4824]: I0224 00:30:00.229384 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46e264d0-74a1-43c2-8e9e-b66bf6a0ce87-secret-volume\") pod \"collect-profiles-29531550-fz9x9\" (UID: \"46e264d0-74a1-43c2-8e9e-b66bf6a0ce87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-fz9x9" Feb 24 00:30:00 crc kubenswrapper[4824]: I0224 00:30:00.330772 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46e264d0-74a1-43c2-8e9e-b66bf6a0ce87-secret-volume\") pod \"collect-profiles-29531550-fz9x9\" (UID: \"46e264d0-74a1-43c2-8e9e-b66bf6a0ce87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-fz9x9" Feb 24 00:30:00 crc kubenswrapper[4824]: I0224 00:30:00.330857 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbwb6\" (UniqueName: \"kubernetes.io/projected/46e264d0-74a1-43c2-8e9e-b66bf6a0ce87-kube-api-access-gbwb6\") pod \"collect-profiles-29531550-fz9x9\" (UID: \"46e264d0-74a1-43c2-8e9e-b66bf6a0ce87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-fz9x9" Feb 24 00:30:00 crc kubenswrapper[4824]: I0224 00:30:00.330903 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46e264d0-74a1-43c2-8e9e-b66bf6a0ce87-config-volume\") pod \"collect-profiles-29531550-fz9x9\" (UID: \"46e264d0-74a1-43c2-8e9e-b66bf6a0ce87\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-fz9x9" Feb 24 00:30:00 crc kubenswrapper[4824]: I0224 00:30:00.331993 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46e264d0-74a1-43c2-8e9e-b66bf6a0ce87-config-volume\") pod \"collect-profiles-29531550-fz9x9\" (UID: \"46e264d0-74a1-43c2-8e9e-b66bf6a0ce87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-fz9x9" Feb 24 00:30:00 crc kubenswrapper[4824]: I0224 00:30:00.340493 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46e264d0-74a1-43c2-8e9e-b66bf6a0ce87-secret-volume\") pod \"collect-profiles-29531550-fz9x9\" (UID: \"46e264d0-74a1-43c2-8e9e-b66bf6a0ce87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-fz9x9" Feb 24 00:30:00 crc kubenswrapper[4824]: I0224 00:30:00.350213 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbwb6\" (UniqueName: \"kubernetes.io/projected/46e264d0-74a1-43c2-8e9e-b66bf6a0ce87-kube-api-access-gbwb6\") pod \"collect-profiles-29531550-fz9x9\" (UID: \"46e264d0-74a1-43c2-8e9e-b66bf6a0ce87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-fz9x9" Feb 24 00:30:00 crc kubenswrapper[4824]: I0224 00:30:00.482121 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-fz9x9" Feb 24 00:30:00 crc kubenswrapper[4824]: I0224 00:30:00.890216 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531550-fz9x9"] Feb 24 00:30:00 crc kubenswrapper[4824]: W0224 00:30:00.898733 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46e264d0_74a1_43c2_8e9e_b66bf6a0ce87.slice/crio-0c32cd2f0d416c552424fda5d5b2d830dbf1d98715059a7c909099f2008ffb45 WatchSource:0}: Error finding container 0c32cd2f0d416c552424fda5d5b2d830dbf1d98715059a7c909099f2008ffb45: Status 404 returned error can't find the container with id 0c32cd2f0d416c552424fda5d5b2d830dbf1d98715059a7c909099f2008ffb45 Feb 24 00:30:01 crc kubenswrapper[4824]: I0224 00:30:01.177312 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-fz9x9" event={"ID":"46e264d0-74a1-43c2-8e9e-b66bf6a0ce87","Type":"ContainerStarted","Data":"e75a1f2ea58f5e7d7f03dabeedf0d261d871be3d42322aec43c9fe4075fab698"} Feb 24 00:30:01 crc kubenswrapper[4824]: I0224 00:30:01.177813 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-fz9x9" event={"ID":"46e264d0-74a1-43c2-8e9e-b66bf6a0ce87","Type":"ContainerStarted","Data":"0c32cd2f0d416c552424fda5d5b2d830dbf1d98715059a7c909099f2008ffb45"} Feb 24 00:30:02 crc kubenswrapper[4824]: I0224 00:30:02.185416 4824 generic.go:334] "Generic (PLEG): container finished" podID="46e264d0-74a1-43c2-8e9e-b66bf6a0ce87" containerID="e75a1f2ea58f5e7d7f03dabeedf0d261d871be3d42322aec43c9fe4075fab698" exitCode=0 Feb 24 00:30:02 crc kubenswrapper[4824]: I0224 00:30:02.185492 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-fz9x9" 
event={"ID":"46e264d0-74a1-43c2-8e9e-b66bf6a0ce87","Type":"ContainerDied","Data":"e75a1f2ea58f5e7d7f03dabeedf0d261d871be3d42322aec43c9fe4075fab698"} Feb 24 00:30:02 crc kubenswrapper[4824]: I0224 00:30:02.187315 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-755b8777c-j59cx" event={"ID":"99d102db-b6a5-428f-acec-1311a225325d","Type":"ContainerStarted","Data":"26d94a9877cc40e64a79c3e06105e642b8b1ead1d8012838afe9b3fc9455b9d6"} Feb 24 00:30:02 crc kubenswrapper[4824]: I0224 00:30:02.233795 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-755b8777c-j59cx" podStartSLOduration=2.265260526 podStartE2EDuration="35.233773274s" podCreationTimestamp="2026-02-24 00:29:27 +0000 UTC" firstStartedPulling="2026-02-24 00:29:28.439594314 +0000 UTC m=+1432.429218783" lastFinishedPulling="2026-02-24 00:30:01.408107072 +0000 UTC m=+1465.397731531" observedRunningTime="2026-02-24 00:30:02.22840405 +0000 UTC m=+1466.218028529" watchObservedRunningTime="2026-02-24 00:30:02.233773274 +0000 UTC m=+1466.223397743" Feb 24 00:30:03 crc kubenswrapper[4824]: I0224 00:30:03.451535 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-fz9x9" Feb 24 00:30:03 crc kubenswrapper[4824]: I0224 00:30:03.482916 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbwb6\" (UniqueName: \"kubernetes.io/projected/46e264d0-74a1-43c2-8e9e-b66bf6a0ce87-kube-api-access-gbwb6\") pod \"46e264d0-74a1-43c2-8e9e-b66bf6a0ce87\" (UID: \"46e264d0-74a1-43c2-8e9e-b66bf6a0ce87\") " Feb 24 00:30:03 crc kubenswrapper[4824]: I0224 00:30:03.482988 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46e264d0-74a1-43c2-8e9e-b66bf6a0ce87-config-volume\") pod \"46e264d0-74a1-43c2-8e9e-b66bf6a0ce87\" (UID: \"46e264d0-74a1-43c2-8e9e-b66bf6a0ce87\") " Feb 24 00:30:03 crc kubenswrapper[4824]: I0224 00:30:03.483036 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46e264d0-74a1-43c2-8e9e-b66bf6a0ce87-secret-volume\") pod \"46e264d0-74a1-43c2-8e9e-b66bf6a0ce87\" (UID: \"46e264d0-74a1-43c2-8e9e-b66bf6a0ce87\") " Feb 24 00:30:03 crc kubenswrapper[4824]: I0224 00:30:03.484548 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46e264d0-74a1-43c2-8e9e-b66bf6a0ce87-config-volume" (OuterVolumeSpecName: "config-volume") pod "46e264d0-74a1-43c2-8e9e-b66bf6a0ce87" (UID: "46e264d0-74a1-43c2-8e9e-b66bf6a0ce87"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:30:03 crc kubenswrapper[4824]: I0224 00:30:03.490720 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46e264d0-74a1-43c2-8e9e-b66bf6a0ce87-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "46e264d0-74a1-43c2-8e9e-b66bf6a0ce87" (UID: "46e264d0-74a1-43c2-8e9e-b66bf6a0ce87"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:30:03 crc kubenswrapper[4824]: I0224 00:30:03.490819 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46e264d0-74a1-43c2-8e9e-b66bf6a0ce87-kube-api-access-gbwb6" (OuterVolumeSpecName: "kube-api-access-gbwb6") pod "46e264d0-74a1-43c2-8e9e-b66bf6a0ce87" (UID: "46e264d0-74a1-43c2-8e9e-b66bf6a0ce87"). InnerVolumeSpecName "kube-api-access-gbwb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:30:03 crc kubenswrapper[4824]: I0224 00:30:03.584893 4824 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46e264d0-74a1-43c2-8e9e-b66bf6a0ce87-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 00:30:03 crc kubenswrapper[4824]: I0224 00:30:03.584968 4824 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46e264d0-74a1-43c2-8e9e-b66bf6a0ce87-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 00:30:03 crc kubenswrapper[4824]: I0224 00:30:03.584980 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbwb6\" (UniqueName: \"kubernetes.io/projected/46e264d0-74a1-43c2-8e9e-b66bf6a0ce87-kube-api-access-gbwb6\") on node \"crc\" DevicePath \"\"" Feb 24 00:30:04 crc kubenswrapper[4824]: I0224 00:30:04.203133 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-fz9x9" event={"ID":"46e264d0-74a1-43c2-8e9e-b66bf6a0ce87","Type":"ContainerDied","Data":"0c32cd2f0d416c552424fda5d5b2d830dbf1d98715059a7c909099f2008ffb45"} Feb 24 00:30:04 crc kubenswrapper[4824]: I0224 00:30:04.203627 4824 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c32cd2f0d416c552424fda5d5b2d830dbf1d98715059a7c909099f2008ffb45" Feb 24 00:30:04 crc kubenswrapper[4824]: I0224 00:30:04.203396 4824 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531550-fz9x9" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.227600 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-c5wht"] Feb 24 00:30:21 crc kubenswrapper[4824]: E0224 00:30:21.229016 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e264d0-74a1-43c2-8e9e-b66bf6a0ce87" containerName="collect-profiles" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.229037 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e264d0-74a1-43c2-8e9e-b66bf6a0ce87" containerName="collect-profiles" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.229231 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="46e264d0-74a1-43c2-8e9e-b66bf6a0ce87" containerName="collect-profiles" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.229956 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.234992 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.235854 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-d6d6z" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.235861 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.236203 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.236279 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.240393 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.242508 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-c5wht"] Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.246293 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.265705 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-c5wht\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " 
pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.265959 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-sasl-users\") pod \"default-interconnect-68864d46cb-c5wht\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.266061 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-c5wht\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.266143 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6lw6\" (UniqueName: \"kubernetes.io/projected/4ffe4e04-44ea-455a-8788-47d60605ed27-kube-api-access-s6lw6\") pod \"default-interconnect-68864d46cb-c5wht\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.266218 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/4ffe4e04-44ea-455a-8788-47d60605ed27-sasl-config\") pod \"default-interconnect-68864d46cb-c5wht\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.266311 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-c5wht\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.266390 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-c5wht\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.367408 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-c5wht\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.367469 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-sasl-users\") pod \"default-interconnect-68864d46cb-c5wht\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.367501 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-inter-router-credentials\") pod 
\"default-interconnect-68864d46cb-c5wht\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.367541 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6lw6\" (UniqueName: \"kubernetes.io/projected/4ffe4e04-44ea-455a-8788-47d60605ed27-kube-api-access-s6lw6\") pod \"default-interconnect-68864d46cb-c5wht\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.367561 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/4ffe4e04-44ea-455a-8788-47d60605ed27-sasl-config\") pod \"default-interconnect-68864d46cb-c5wht\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.367588 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-c5wht\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.367608 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-c5wht\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.369183 4824 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/4ffe4e04-44ea-455a-8788-47d60605ed27-sasl-config\") pod \"default-interconnect-68864d46cb-c5wht\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.374662 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-c5wht\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.374688 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-c5wht\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.374768 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-c5wht\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.375044 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-openstack-ca\") pod 
\"default-interconnect-68864d46cb-c5wht\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.375952 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-sasl-users\") pod \"default-interconnect-68864d46cb-c5wht\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.388891 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6lw6\" (UniqueName: \"kubernetes.io/projected/4ffe4e04-44ea-455a-8788-47d60605ed27-kube-api-access-s6lw6\") pod \"default-interconnect-68864d46cb-c5wht\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.563938 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:30:21 crc kubenswrapper[4824]: I0224 00:30:21.792354 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-c5wht"] Feb 24 00:30:22 crc kubenswrapper[4824]: I0224 00:30:22.342453 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-c5wht" event={"ID":"4ffe4e04-44ea-455a-8788-47d60605ed27","Type":"ContainerStarted","Data":"53601482595b879499df40be3a545ef10a2727ef3879843a1a926827995e2ff3"} Feb 24 00:30:28 crc kubenswrapper[4824]: I0224 00:30:28.399851 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-c5wht" event={"ID":"4ffe4e04-44ea-455a-8788-47d60605ed27","Type":"ContainerStarted","Data":"29bbfa19ec9b4d4d9603b53deb3db8ea412ca063873b7a0aadd1b1ae2c93f293"} Feb 24 00:30:28 crc kubenswrapper[4824]: I0224 00:30:28.425195 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-c5wht" podStartSLOduration=1.8167469600000001 podStartE2EDuration="7.425174869s" podCreationTimestamp="2026-02-24 00:30:21 +0000 UTC" firstStartedPulling="2026-02-24 00:30:21.802496793 +0000 UTC m=+1485.792121272" lastFinishedPulling="2026-02-24 00:30:27.410924712 +0000 UTC m=+1491.400549181" observedRunningTime="2026-02-24 00:30:28.421148609 +0000 UTC m=+1492.410773158" watchObservedRunningTime="2026-02-24 00:30:28.425174869 +0000 UTC m=+1492.414799338" Feb 24 00:30:32 crc kubenswrapper[4824]: I0224 00:30:32.974708 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 24 00:30:32 crc kubenswrapper[4824]: I0224 00:30:32.977956 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 24 00:30:32 crc kubenswrapper[4824]: I0224 00:30:32.981509 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-24dl4" Feb 24 00:30:32 crc kubenswrapper[4824]: I0224 00:30:32.981734 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Feb 24 00:30:32 crc kubenswrapper[4824]: I0224 00:30:32.981549 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0" Feb 24 00:30:32 crc kubenswrapper[4824]: I0224 00:30:32.981549 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Feb 24 00:30:32 crc kubenswrapper[4824]: I0224 00:30:32.982058 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-1" Feb 24 00:30:32 crc kubenswrapper[4824]: I0224 00:30:32.982113 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Feb 24 00:30:32 crc kubenswrapper[4824]: I0224 00:30:32.981557 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-2" Feb 24 00:30:32 crc kubenswrapper[4824]: I0224 00:30:32.982244 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" Feb 24 00:30:32 crc kubenswrapper[4824]: I0224 00:30:32.981566 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0" Feb 24 00:30:32 crc kubenswrapper[4824]: I0224 00:30:32.984700 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Feb 24 00:30:32 crc kubenswrapper[4824]: I0224 00:30:32.997732 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/prometheus-default-0"] Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.089912 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d1d48ccf-0bde-4748-8128-1e82ca1f302a-web-config\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.090024 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d1d48ccf-0bde-4748-8128-1e82ca1f302a-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.090049 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d1d48ccf-0bde-4748-8128-1e82ca1f302a-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.090085 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1d48ccf-0bde-4748-8128-1e82ca1f302a-config\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.090109 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d1d48ccf-0bde-4748-8128-1e82ca1f302a-secret-default-prometheus-proxy-tls\") 
pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.090131 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d1d48ccf-0bde-4748-8128-1e82ca1f302a-tls-assets\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.090302 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6m9w\" (UniqueName: \"kubernetes.io/projected/d1d48ccf-0bde-4748-8128-1e82ca1f302a-kube-api-access-j6m9w\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.090398 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-53e02954-1178-4e89-ac80-46a476b99871\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53e02954-1178-4e89-ac80-46a476b99871\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.090435 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d1d48ccf-0bde-4748-8128-1e82ca1f302a-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.090479 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1d48ccf-0bde-4748-8128-1e82ca1f302a-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.090554 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d1d48ccf-0bde-4748-8128-1e82ca1f302a-config-out\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.090578 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/d1d48ccf-0bde-4748-8128-1e82ca1f302a-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.191589 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d1d48ccf-0bde-4748-8128-1e82ca1f302a-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.191950 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d1d48ccf-0bde-4748-8128-1e82ca1f302a-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.192094 4824 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1d48ccf-0bde-4748-8128-1e82ca1f302a-config\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.192231 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d1d48ccf-0bde-4748-8128-1e82ca1f302a-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.192320 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d1d48ccf-0bde-4748-8128-1e82ca1f302a-tls-assets\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: E0224 00:30:33.192390 4824 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 24 00:30:33 crc kubenswrapper[4824]: E0224 00:30:33.192529 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1d48ccf-0bde-4748-8128-1e82ca1f302a-secret-default-prometheus-proxy-tls podName:d1d48ccf-0bde-4748-8128-1e82ca1f302a nodeName:}" failed. No retries permitted until 2026-02-24 00:30:33.69248547 +0000 UTC m=+1497.682109939 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/d1d48ccf-0bde-4748-8128-1e82ca1f302a-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "d1d48ccf-0bde-4748-8128-1e82ca1f302a") : secret "default-prometheus-proxy-tls" not found Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.192917 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d1d48ccf-0bde-4748-8128-1e82ca1f302a-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.192963 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d1d48ccf-0bde-4748-8128-1e82ca1f302a-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.193182 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6m9w\" (UniqueName: \"kubernetes.io/projected/d1d48ccf-0bde-4748-8128-1e82ca1f302a-kube-api-access-j6m9w\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.193281 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-53e02954-1178-4e89-ac80-46a476b99871\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53e02954-1178-4e89-ac80-46a476b99871\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 
00:30:33.193385 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d1d48ccf-0bde-4748-8128-1e82ca1f302a-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.193998 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1d48ccf-0bde-4748-8128-1e82ca1f302a-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.194115 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d1d48ccf-0bde-4748-8128-1e82ca1f302a-config-out\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.194223 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/d1d48ccf-0bde-4748-8128-1e82ca1f302a-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.194818 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d1d48ccf-0bde-4748-8128-1e82ca1f302a-web-config\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.194758 
4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1d48ccf-0bde-4748-8128-1e82ca1f302a-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.193952 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d1d48ccf-0bde-4748-8128-1e82ca1f302a-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.196367 4824 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.196405 4824 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-53e02954-1178-4e89-ac80-46a476b99871\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53e02954-1178-4e89-ac80-46a476b99871\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a26aa4f9152d04742e9ebd850c2e40da641d10c688b7a26b812dff5f76d22587/globalmount\"" pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.199301 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d1d48ccf-0bde-4748-8128-1e82ca1f302a-config-out\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.199890 4824 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d1d48ccf-0bde-4748-8128-1e82ca1f302a-config\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.200475 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/d1d48ccf-0bde-4748-8128-1e82ca1f302a-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.209092 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d1d48ccf-0bde-4748-8128-1e82ca1f302a-tls-assets\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.216124 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d1d48ccf-0bde-4748-8128-1e82ca1f302a-web-config\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.216854 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6m9w\" (UniqueName: \"kubernetes.io/projected/d1d48ccf-0bde-4748-8128-1e82ca1f302a-kube-api-access-j6m9w\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.233050 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-53e02954-1178-4e89-ac80-46a476b99871\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-53e02954-1178-4e89-ac80-46a476b99871\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: I0224 00:30:33.702207 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d1d48ccf-0bde-4748-8128-1e82ca1f302a-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:33 crc kubenswrapper[4824]: E0224 00:30:33.702654 4824 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 24 00:30:33 crc kubenswrapper[4824]: E0224 00:30:33.702791 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1d48ccf-0bde-4748-8128-1e82ca1f302a-secret-default-prometheus-proxy-tls podName:d1d48ccf-0bde-4748-8128-1e82ca1f302a nodeName:}" failed. No retries permitted until 2026-02-24 00:30:34.702776527 +0000 UTC m=+1498.692400996 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/d1d48ccf-0bde-4748-8128-1e82ca1f302a-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "d1d48ccf-0bde-4748-8128-1e82ca1f302a") : secret "default-prometheus-proxy-tls" not found Feb 24 00:30:34 crc kubenswrapper[4824]: I0224 00:30:34.726313 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d1d48ccf-0bde-4748-8128-1e82ca1f302a-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:34 crc kubenswrapper[4824]: I0224 00:30:34.733868 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d1d48ccf-0bde-4748-8128-1e82ca1f302a-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"d1d48ccf-0bde-4748-8128-1e82ca1f302a\") " pod="service-telemetry/prometheus-default-0" Feb 24 00:30:34 crc kubenswrapper[4824]: I0224 00:30:34.799141 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 24 00:30:35 crc kubenswrapper[4824]: I0224 00:30:35.046758 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 24 00:30:35 crc kubenswrapper[4824]: I0224 00:30:35.452905 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"d1d48ccf-0bde-4748-8128-1e82ca1f302a","Type":"ContainerStarted","Data":"7f379d1da5002e7236ec0fc7c93d073b51bc805a4b867bef73a08afcaef11ced"} Feb 24 00:30:39 crc kubenswrapper[4824]: I0224 00:30:39.490011 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"d1d48ccf-0bde-4748-8128-1e82ca1f302a","Type":"ContainerStarted","Data":"4111657b5dc645e832383c623f1d92292f4ce6c616d9b987861005d1a8191449"} Feb 24 00:30:43 crc kubenswrapper[4824]: I0224 00:30:43.819007 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-zxg6n"] Feb 24 00:30:43 crc kubenswrapper[4824]: I0224 00:30:43.820672 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-zxg6n" Feb 24 00:30:43 crc kubenswrapper[4824]: I0224 00:30:43.846375 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-zxg6n"] Feb 24 00:30:43 crc kubenswrapper[4824]: I0224 00:30:43.973792 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbj4m\" (UniqueName: \"kubernetes.io/projected/13d35d6f-04c4-438a-bda9-ce9c4ed84b99-kube-api-access-hbj4m\") pod \"default-snmp-webhook-6856cfb745-zxg6n\" (UID: \"13d35d6f-04c4-438a-bda9-ce9c4ed84b99\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-zxg6n" Feb 24 00:30:44 crc kubenswrapper[4824]: I0224 00:30:44.074948 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbj4m\" (UniqueName: \"kubernetes.io/projected/13d35d6f-04c4-438a-bda9-ce9c4ed84b99-kube-api-access-hbj4m\") pod \"default-snmp-webhook-6856cfb745-zxg6n\" (UID: \"13d35d6f-04c4-438a-bda9-ce9c4ed84b99\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-zxg6n" Feb 24 00:30:44 crc kubenswrapper[4824]: I0224 00:30:44.102710 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbj4m\" (UniqueName: \"kubernetes.io/projected/13d35d6f-04c4-438a-bda9-ce9c4ed84b99-kube-api-access-hbj4m\") pod \"default-snmp-webhook-6856cfb745-zxg6n\" (UID: \"13d35d6f-04c4-438a-bda9-ce9c4ed84b99\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-zxg6n" Feb 24 00:30:44 crc kubenswrapper[4824]: I0224 00:30:44.143304 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-zxg6n" Feb 24 00:30:44 crc kubenswrapper[4824]: I0224 00:30:44.395905 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-zxg6n"] Feb 24 00:30:44 crc kubenswrapper[4824]: I0224 00:30:44.534548 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-zxg6n" event={"ID":"13d35d6f-04c4-438a-bda9-ce9c4ed84b99","Type":"ContainerStarted","Data":"77c4391bb922438b22712263f7cf09c867bf940b570ddbbc748608267fd5072d"} Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.504153 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.507847 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.514805 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.517710 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.518035 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.518226 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-vxcg6" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.518361 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.518580 4824 reflector.go:368] Caches populated for *v1.Secret from 
object-"service-telemetry"/"alertmanager-default-tls-assets-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.533743 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.557962 4824 generic.go:334] "Generic (PLEG): container finished" podID="d1d48ccf-0bde-4748-8128-1e82ca1f302a" containerID="4111657b5dc645e832383c623f1d92292f4ce6c616d9b987861005d1a8191449" exitCode=0 Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.558025 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"d1d48ccf-0bde-4748-8128-1e82ca1f302a","Type":"ContainerDied","Data":"4111657b5dc645e832383c623f1d92292f4ce6c616d9b987861005d1a8191449"} Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.635973 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b4916ffb-2e83-480a-a12f-ad04c6144517-tls-assets\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.636064 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.636095 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-config-volume\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " 
pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.636233 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b4916ffb-2e83-480a-a12f-ad04c6144517-config-out\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.636439 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.636641 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-19750a56-7d5e-4445-b890-e73db1a4c7c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19750a56-7d5e-4445-b890-e73db1a4c7c1\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.636694 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.636754 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4s8v\" (UniqueName: 
\"kubernetes.io/projected/b4916ffb-2e83-480a-a12f-ad04c6144517-kube-api-access-k4s8v\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.636867 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-web-config\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.740465 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b4916ffb-2e83-480a-a12f-ad04c6144517-tls-assets\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.740602 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.740619 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-config-volume\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.740653 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/b4916ffb-2e83-480a-a12f-ad04c6144517-config-out\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.740684 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.740757 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-19750a56-7d5e-4445-b890-e73db1a4c7c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19750a56-7d5e-4445-b890-e73db1a4c7c1\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.740787 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.740805 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4s8v\" (UniqueName: \"kubernetes.io/projected/b4916ffb-2e83-480a-a12f-ad04c6144517-kube-api-access-k4s8v\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.740831 4824 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-web-config\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: E0224 00:30:47.742827 4824 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 24 00:30:47 crc kubenswrapper[4824]: E0224 00:30:47.742902 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-secret-default-alertmanager-proxy-tls podName:b4916ffb-2e83-480a-a12f-ad04c6144517 nodeName:}" failed. No retries permitted until 2026-02-24 00:30:48.242878191 +0000 UTC m=+1512.232502660 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "b4916ffb-2e83-480a-a12f-ad04c6144517") : secret "default-alertmanager-proxy-tls" not found Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.747971 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.749025 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-web-config\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 
00:30:47.749241 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.749246 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b4916ffb-2e83-480a-a12f-ad04c6144517-config-out\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.749362 4824 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.749397 4824 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-19750a56-7d5e-4445-b890-e73db1a4c7c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19750a56-7d5e-4445-b890-e73db1a4c7c1\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6803be11a56e3609b692f5f0a005852564157744315862e6f0993faa3e20a471/globalmount\"" pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.749631 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b4916ffb-2e83-480a-a12f-ad04c6144517-tls-assets\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.754550 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-config-volume\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.762724 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4s8v\" (UniqueName: \"kubernetes.io/projected/b4916ffb-2e83-480a-a12f-ad04c6144517-kube-api-access-k4s8v\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:47 crc kubenswrapper[4824]: I0224 00:30:47.781718 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-19750a56-7d5e-4445-b890-e73db1a4c7c1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19750a56-7d5e-4445-b890-e73db1a4c7c1\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:48 crc kubenswrapper[4824]: I0224 00:30:48.248273 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:48 crc kubenswrapper[4824]: E0224 00:30:48.248482 4824 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 24 00:30:48 crc kubenswrapper[4824]: E0224 00:30:48.249112 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-secret-default-alertmanager-proxy-tls podName:b4916ffb-2e83-480a-a12f-ad04c6144517 nodeName:}" failed. 
No retries permitted until 2026-02-24 00:30:49.249082267 +0000 UTC m=+1513.238706746 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "b4916ffb-2e83-480a-a12f-ad04c6144517") : secret "default-alertmanager-proxy-tls" not found Feb 24 00:30:49 crc kubenswrapper[4824]: I0224 00:30:49.268833 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:49 crc kubenswrapper[4824]: E0224 00:30:49.269278 4824 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 24 00:30:49 crc kubenswrapper[4824]: E0224 00:30:49.269370 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-secret-default-alertmanager-proxy-tls podName:b4916ffb-2e83-480a-a12f-ad04c6144517 nodeName:}" failed. No retries permitted until 2026-02-24 00:30:51.269344443 +0000 UTC m=+1515.258968912 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "b4916ffb-2e83-480a-a12f-ad04c6144517") : secret "default-alertmanager-proxy-tls" not found Feb 24 00:30:51 crc kubenswrapper[4824]: I0224 00:30:51.314130 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:51 crc kubenswrapper[4824]: I0224 00:30:51.321153 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4916ffb-2e83-480a-a12f-ad04c6144517-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"b4916ffb-2e83-480a-a12f-ad04c6144517\") " pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:51 crc kubenswrapper[4824]: I0224 00:30:51.435853 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 24 00:30:52 crc kubenswrapper[4824]: I0224 00:30:52.337944 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 24 00:30:52 crc kubenswrapper[4824]: W0224 00:30:52.513549 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4916ffb_2e83_480a_a12f_ad04c6144517.slice/crio-d55f176c94244b485ddbdd036af985cce55607cb698647513297663dfc60fea0 WatchSource:0}: Error finding container d55f176c94244b485ddbdd036af985cce55607cb698647513297663dfc60fea0: Status 404 returned error can't find the container with id d55f176c94244b485ddbdd036af985cce55607cb698647513297663dfc60fea0 Feb 24 00:30:52 crc kubenswrapper[4824]: I0224 00:30:52.598658 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"b4916ffb-2e83-480a-a12f-ad04c6144517","Type":"ContainerStarted","Data":"d55f176c94244b485ddbdd036af985cce55607cb698647513297663dfc60fea0"} Feb 24 00:30:53 crc kubenswrapper[4824]: I0224 00:30:53.276796 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:30:53 crc kubenswrapper[4824]: I0224 00:30:53.277369 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:30:53 crc kubenswrapper[4824]: I0224 00:30:53.610310 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-snmp-webhook-6856cfb745-zxg6n" event={"ID":"13d35d6f-04c4-438a-bda9-ce9c4ed84b99","Type":"ContainerStarted","Data":"81ee21fad53c64cdb64601823b0c6addb7dd8fe758dbfb38eaa90c6b721ee15b"} Feb 24 00:30:53 crc kubenswrapper[4824]: I0224 00:30:53.638327 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6856cfb745-zxg6n" podStartSLOduration=2.536679472 podStartE2EDuration="10.638292778s" podCreationTimestamp="2026-02-24 00:30:43 +0000 UTC" firstStartedPulling="2026-02-24 00:30:44.402763254 +0000 UTC m=+1508.392387723" lastFinishedPulling="2026-02-24 00:30:52.50437654 +0000 UTC m=+1516.494001029" observedRunningTime="2026-02-24 00:30:53.630272859 +0000 UTC m=+1517.619897328" watchObservedRunningTime="2026-02-24 00:30:53.638292778 +0000 UTC m=+1517.627917257" Feb 24 00:30:55 crc kubenswrapper[4824]: I0224 00:30:55.630791 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"b4916ffb-2e83-480a-a12f-ad04c6144517","Type":"ContainerStarted","Data":"4b9764061f062eec1646f91b5f45f22196cef8acdac4c333d92f1ae0bb6b9c29"} Feb 24 00:30:56 crc kubenswrapper[4824]: I0224 00:30:56.638715 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"d1d48ccf-0bde-4748-8128-1e82ca1f302a","Type":"ContainerStarted","Data":"eefc6044002b003b5b2092d6f6bd05b9c3c8779dc4be438198ee04599e837ded"} Feb 24 00:30:58 crc kubenswrapper[4824]: I0224 00:30:58.656142 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"d1d48ccf-0bde-4748-8128-1e82ca1f302a","Type":"ContainerStarted","Data":"b59498721549b7fd304ac678e60b55112664770494c924a17b71716353906e30"} Feb 24 00:31:01 crc kubenswrapper[4824]: I0224 00:31:01.680955 4824 generic.go:334] "Generic (PLEG): container finished" podID="b4916ffb-2e83-480a-a12f-ad04c6144517" 
containerID="4b9764061f062eec1646f91b5f45f22196cef8acdac4c333d92f1ae0bb6b9c29" exitCode=0 Feb 24 00:31:01 crc kubenswrapper[4824]: I0224 00:31:01.681329 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"b4916ffb-2e83-480a-a12f-ad04c6144517","Type":"ContainerDied","Data":"4b9764061f062eec1646f91b5f45f22196cef8acdac4c333d92f1ae0bb6b9c29"} Feb 24 00:31:02 crc kubenswrapper[4824]: I0224 00:31:02.707749 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw"] Feb 24 00:31:02 crc kubenswrapper[4824]: I0224 00:31:02.710183 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" Feb 24 00:31:02 crc kubenswrapper[4824]: I0224 00:31:02.724385 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-cqjg5" Feb 24 00:31:02 crc kubenswrapper[4824]: I0224 00:31:02.724878 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap" Feb 24 00:31:02 crc kubenswrapper[4824]: I0224 00:31:02.724925 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret" Feb 24 00:31:02 crc kubenswrapper[4824]: I0224 00:31:02.725222 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls" Feb 24 00:31:02 crc kubenswrapper[4824]: I0224 00:31:02.729302 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw"] Feb 24 00:31:02 crc kubenswrapper[4824]: I0224 00:31:02.817167 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: 
\"kubernetes.io/secret/b1b6fe19-ad2f-490e-80dc-39ed80de85b3-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw\" (UID: \"b1b6fe19-ad2f-490e-80dc-39ed80de85b3\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" Feb 24 00:31:02 crc kubenswrapper[4824]: I0224 00:31:02.817298 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b1b6fe19-ad2f-490e-80dc-39ed80de85b3-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw\" (UID: \"b1b6fe19-ad2f-490e-80dc-39ed80de85b3\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" Feb 24 00:31:02 crc kubenswrapper[4824]: I0224 00:31:02.817327 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr6rv\" (UniqueName: \"kubernetes.io/projected/b1b6fe19-ad2f-490e-80dc-39ed80de85b3-kube-api-access-mr6rv\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw\" (UID: \"b1b6fe19-ad2f-490e-80dc-39ed80de85b3\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" Feb 24 00:31:02 crc kubenswrapper[4824]: I0224 00:31:02.817423 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b1b6fe19-ad2f-490e-80dc-39ed80de85b3-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw\" (UID: \"b1b6fe19-ad2f-490e-80dc-39ed80de85b3\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" Feb 24 00:31:02 crc kubenswrapper[4824]: I0224 00:31:02.817796 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/b1b6fe19-ad2f-490e-80dc-39ed80de85b3-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw\" (UID: \"b1b6fe19-ad2f-490e-80dc-39ed80de85b3\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" Feb 24 00:31:02 crc kubenswrapper[4824]: I0224 00:31:02.919292 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b1b6fe19-ad2f-490e-80dc-39ed80de85b3-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw\" (UID: \"b1b6fe19-ad2f-490e-80dc-39ed80de85b3\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" Feb 24 00:31:02 crc kubenswrapper[4824]: I0224 00:31:02.919616 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/b1b6fe19-ad2f-490e-80dc-39ed80de85b3-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw\" (UID: \"b1b6fe19-ad2f-490e-80dc-39ed80de85b3\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" Feb 24 00:31:02 crc kubenswrapper[4824]: I0224 00:31:02.919651 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b1b6fe19-ad2f-490e-80dc-39ed80de85b3-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw\" (UID: \"b1b6fe19-ad2f-490e-80dc-39ed80de85b3\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" Feb 24 00:31:02 crc kubenswrapper[4824]: I0224 00:31:02.919668 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr6rv\" (UniqueName: \"kubernetes.io/projected/b1b6fe19-ad2f-490e-80dc-39ed80de85b3-kube-api-access-mr6rv\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw\" (UID: 
\"b1b6fe19-ad2f-490e-80dc-39ed80de85b3\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" Feb 24 00:31:02 crc kubenswrapper[4824]: I0224 00:31:02.919693 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b1b6fe19-ad2f-490e-80dc-39ed80de85b3-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw\" (UID: \"b1b6fe19-ad2f-490e-80dc-39ed80de85b3\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" Feb 24 00:31:02 crc kubenswrapper[4824]: E0224 00:31:02.919923 4824 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 24 00:31:02 crc kubenswrapper[4824]: E0224 00:31:02.920051 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b6fe19-ad2f-490e-80dc-39ed80de85b3-default-cloud1-coll-meter-proxy-tls podName:b1b6fe19-ad2f-490e-80dc-39ed80de85b3 nodeName:}" failed. No retries permitted until 2026-02-24 00:31:03.420020012 +0000 UTC m=+1527.409644481 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/b1b6fe19-ad2f-490e-80dc-39ed80de85b3-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" (UID: "b1b6fe19-ad2f-490e-80dc-39ed80de85b3") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 24 00:31:02 crc kubenswrapper[4824]: I0224 00:31:02.920492 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b1b6fe19-ad2f-490e-80dc-39ed80de85b3-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw\" (UID: \"b1b6fe19-ad2f-490e-80dc-39ed80de85b3\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" Feb 24 00:31:02 crc kubenswrapper[4824]: I0224 00:31:02.920956 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b1b6fe19-ad2f-490e-80dc-39ed80de85b3-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw\" (UID: \"b1b6fe19-ad2f-490e-80dc-39ed80de85b3\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" Feb 24 00:31:02 crc kubenswrapper[4824]: I0224 00:31:02.931135 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/b1b6fe19-ad2f-490e-80dc-39ed80de85b3-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw\" (UID: \"b1b6fe19-ad2f-490e-80dc-39ed80de85b3\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" Feb 24 00:31:02 crc kubenswrapper[4824]: I0224 00:31:02.938369 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr6rv\" (UniqueName: \"kubernetes.io/projected/b1b6fe19-ad2f-490e-80dc-39ed80de85b3-kube-api-access-mr6rv\") pod 
\"default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw\" (UID: \"b1b6fe19-ad2f-490e-80dc-39ed80de85b3\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" Feb 24 00:31:03 crc kubenswrapper[4824]: I0224 00:31:03.429023 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b1b6fe19-ad2f-490e-80dc-39ed80de85b3-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw\" (UID: \"b1b6fe19-ad2f-490e-80dc-39ed80de85b3\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" Feb 24 00:31:03 crc kubenswrapper[4824]: E0224 00:31:03.429233 4824 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 24 00:31:03 crc kubenswrapper[4824]: E0224 00:31:03.429349 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b6fe19-ad2f-490e-80dc-39ed80de85b3-default-cloud1-coll-meter-proxy-tls podName:b1b6fe19-ad2f-490e-80dc-39ed80de85b3 nodeName:}" failed. No retries permitted until 2026-02-24 00:31:04.429322255 +0000 UTC m=+1528.418946724 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/b1b6fe19-ad2f-490e-80dc-39ed80de85b3-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" (UID: "b1b6fe19-ad2f-490e-80dc-39ed80de85b3") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 24 00:31:04 crc kubenswrapper[4824]: I0224 00:31:04.445872 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b1b6fe19-ad2f-490e-80dc-39ed80de85b3-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw\" (UID: \"b1b6fe19-ad2f-490e-80dc-39ed80de85b3\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" Feb 24 00:31:04 crc kubenswrapper[4824]: I0224 00:31:04.451255 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b1b6fe19-ad2f-490e-80dc-39ed80de85b3-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw\" (UID: \"b1b6fe19-ad2f-490e-80dc-39ed80de85b3\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" Feb 24 00:31:04 crc kubenswrapper[4824]: I0224 00:31:04.535991 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" Feb 24 00:31:05 crc kubenswrapper[4824]: I0224 00:31:05.146491 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw"] Feb 24 00:31:05 crc kubenswrapper[4824]: W0224 00:31:05.308693 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1b6fe19_ad2f_490e_80dc_39ed80de85b3.slice/crio-614cf64db079533bc1cd7cc7a4fe56b9e55b2adc095beb655ac8a1fde730548e WatchSource:0}: Error finding container 614cf64db079533bc1cd7cc7a4fe56b9e55b2adc095beb655ac8a1fde730548e: Status 404 returned error can't find the container with id 614cf64db079533bc1cd7cc7a4fe56b9e55b2adc095beb655ac8a1fde730548e Feb 24 00:31:05 crc kubenswrapper[4824]: I0224 00:31:05.716127 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"d1d48ccf-0bde-4748-8128-1e82ca1f302a","Type":"ContainerStarted","Data":"7174819ab5b2a05572d6604bef6b3119a5c5bbd4ceffd358045569d1650a9930"} Feb 24 00:31:05 crc kubenswrapper[4824]: I0224 00:31:05.723622 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" event={"ID":"b1b6fe19-ad2f-490e-80dc-39ed80de85b3","Type":"ContainerStarted","Data":"614cf64db079533bc1cd7cc7a4fe56b9e55b2adc095beb655ac8a1fde730548e"} Feb 24 00:31:05 crc kubenswrapper[4824]: I0224 00:31:05.742265 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=5.049294553 podStartE2EDuration="34.742243414s" podCreationTimestamp="2026-02-24 00:30:31 +0000 UTC" firstStartedPulling="2026-02-24 00:30:35.044645724 +0000 UTC m=+1499.034270183" lastFinishedPulling="2026-02-24 00:31:04.737594575 +0000 UTC m=+1528.727219044" 
observedRunningTime="2026-02-24 00:31:05.741375192 +0000 UTC m=+1529.730999661" watchObservedRunningTime="2026-02-24 00:31:05.742243414 +0000 UTC m=+1529.731867873" Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.133013 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw"] Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.135006 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw" Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.138392 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls" Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.138392 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap" Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.145051 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw"] Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.274440 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/99b264a5-5103-4445-8978-942c71208377-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw\" (UID: \"99b264a5-5103-4445-8978-942c71208377\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw" Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.274544 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/99b264a5-5103-4445-8978-942c71208377-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw\" (UID: 
\"99b264a5-5103-4445-8978-942c71208377\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw"
Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.274583 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/99b264a5-5103-4445-8978-942c71208377-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw\" (UID: \"99b264a5-5103-4445-8978-942c71208377\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw"
Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.274607 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/99b264a5-5103-4445-8978-942c71208377-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw\" (UID: \"99b264a5-5103-4445-8978-942c71208377\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw"
Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.274636 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssscv\" (UniqueName: \"kubernetes.io/projected/99b264a5-5103-4445-8978-942c71208377-kube-api-access-ssscv\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw\" (UID: \"99b264a5-5103-4445-8978-942c71208377\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw"
Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.377143 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/99b264a5-5103-4445-8978-942c71208377-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw\" (UID: \"99b264a5-5103-4445-8978-942c71208377\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw"
Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.378501 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/99b264a5-5103-4445-8978-942c71208377-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw\" (UID: \"99b264a5-5103-4445-8978-942c71208377\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw"
Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.378571 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/99b264a5-5103-4445-8978-942c71208377-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw\" (UID: \"99b264a5-5103-4445-8978-942c71208377\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw"
Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.378604 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/99b264a5-5103-4445-8978-942c71208377-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw\" (UID: \"99b264a5-5103-4445-8978-942c71208377\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw"
Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.378631 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssscv\" (UniqueName: \"kubernetes.io/projected/99b264a5-5103-4445-8978-942c71208377-kube-api-access-ssscv\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw\" (UID: \"99b264a5-5103-4445-8978-942c71208377\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw"
Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.379443 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/99b264a5-5103-4445-8978-942c71208377-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw\" (UID: \"99b264a5-5103-4445-8978-942c71208377\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw"
Feb 24 00:31:06 crc kubenswrapper[4824]: E0224 00:31:06.379698 4824 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found
Feb 24 00:31:06 crc kubenswrapper[4824]: E0224 00:31:06.380279 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99b264a5-5103-4445-8978-942c71208377-default-cloud1-ceil-meter-proxy-tls podName:99b264a5-5103-4445-8978-942c71208377 nodeName:}" failed. No retries permitted until 2026-02-24 00:31:06.879782512 +0000 UTC m=+1530.869406981 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/99b264a5-5103-4445-8978-942c71208377-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw" (UID: "99b264a5-5103-4445-8978-942c71208377") : secret "default-cloud1-ceil-meter-proxy-tls" not found
Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.380305 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/99b264a5-5103-4445-8978-942c71208377-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw\" (UID: \"99b264a5-5103-4445-8978-942c71208377\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw"
Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.388185 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/99b264a5-5103-4445-8978-942c71208377-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw\" (UID: \"99b264a5-5103-4445-8978-942c71208377\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw"
Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.396589 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssscv\" (UniqueName: \"kubernetes.io/projected/99b264a5-5103-4445-8978-942c71208377-kube-api-access-ssscv\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw\" (UID: \"99b264a5-5103-4445-8978-942c71208377\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw"
Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.733280 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"b4916ffb-2e83-480a-a12f-ad04c6144517","Type":"ContainerStarted","Data":"1cfc2b2dc30bbc7d2befd25ab12a19debefea13d4b2fca65ae59d07f84b29844"}
Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.736887 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" event={"ID":"b1b6fe19-ad2f-490e-80dc-39ed80de85b3","Type":"ContainerStarted","Data":"3e5b1e8ee30bcdfa03bd25bd368f1687f69c352c8d4634bbd7cd8c28f9bb4e5f"}
Feb 24 00:31:06 crc kubenswrapper[4824]: I0224 00:31:06.885959 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/99b264a5-5103-4445-8978-942c71208377-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw\" (UID: \"99b264a5-5103-4445-8978-942c71208377\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw"
Feb 24 00:31:06 crc kubenswrapper[4824]: E0224 00:31:06.886800 4824 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found
Feb 24 00:31:06 crc kubenswrapper[4824]: E0224 00:31:06.886879 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99b264a5-5103-4445-8978-942c71208377-default-cloud1-ceil-meter-proxy-tls podName:99b264a5-5103-4445-8978-942c71208377 nodeName:}" failed. No retries permitted until 2026-02-24 00:31:07.886857949 +0000 UTC m=+1531.876482418 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/99b264a5-5103-4445-8978-942c71208377-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw" (UID: "99b264a5-5103-4445-8978-942c71208377") : secret "default-cloud1-ceil-meter-proxy-tls" not found
Feb 24 00:31:07 crc kubenswrapper[4824]: I0224 00:31:07.746427 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" event={"ID":"b1b6fe19-ad2f-490e-80dc-39ed80de85b3","Type":"ContainerStarted","Data":"4cf792676db082f4deec5a1cc01c8abbce90a283e92e03176c0a5e3f90a5aa8c"}
Feb 24 00:31:07 crc kubenswrapper[4824]: I0224 00:31:07.911234 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/99b264a5-5103-4445-8978-942c71208377-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw\" (UID: \"99b264a5-5103-4445-8978-942c71208377\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw"
Feb 24 00:31:07 crc kubenswrapper[4824]: I0224 00:31:07.921776 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/99b264a5-5103-4445-8978-942c71208377-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw\" (UID: \"99b264a5-5103-4445-8978-942c71208377\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw"
Feb 24 00:31:07 crc kubenswrapper[4824]: I0224 00:31:07.961651 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw"
Feb 24 00:31:08 crc kubenswrapper[4824]: I0224 00:31:08.545632 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw"]
Feb 24 00:31:08 crc kubenswrapper[4824]: W0224 00:31:08.555719 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99b264a5_5103_4445_8978_942c71208377.slice/crio-7de16a116602e343519bc1cd05669beb311e65cc5f4b48cc665c5014e2053893 WatchSource:0}: Error finding container 7de16a116602e343519bc1cd05669beb311e65cc5f4b48cc665c5014e2053893: Status 404 returned error can't find the container with id 7de16a116602e343519bc1cd05669beb311e65cc5f4b48cc665c5014e2053893
Feb 24 00:31:08 crc kubenswrapper[4824]: I0224 00:31:08.757061 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"b4916ffb-2e83-480a-a12f-ad04c6144517","Type":"ContainerStarted","Data":"53f1e37d750e2126213d8e20208c38aef9ce0ac22c9b625149e85917d94dbf92"}
Feb 24 00:31:08 crc kubenswrapper[4824]: I0224 00:31:08.760501 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw" event={"ID":"99b264a5-5103-4445-8978-942c71208377","Type":"ContainerStarted","Data":"7de16a116602e343519bc1cd05669beb311e65cc5f4b48cc665c5014e2053893"}
Feb 24 00:31:09 crc kubenswrapper[4824]: I0224 00:31:09.799807 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0"
Feb 24 00:31:10 crc kubenswrapper[4824]: I0224 00:31:10.233276 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"]
Feb 24 00:31:10 crc kubenswrapper[4824]: I0224 00:31:10.241011 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"
Feb 24 00:31:10 crc kubenswrapper[4824]: I0224 00:31:10.246088 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls"
Feb 24 00:31:10 crc kubenswrapper[4824]: I0224 00:31:10.246221 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap"
Feb 24 00:31:10 crc kubenswrapper[4824]: I0224 00:31:10.251181 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"]
Feb 24 00:31:10 crc kubenswrapper[4824]: I0224 00:31:10.355843 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/25d3b43f-0bff-44ca-83f4-b8a0052cd764-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84\" (UID: \"25d3b43f-0bff-44ca-83f4-b8a0052cd764\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"
Feb 24 00:31:10 crc kubenswrapper[4824]: I0224 00:31:10.355915 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/25d3b43f-0bff-44ca-83f4-b8a0052cd764-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84\" (UID: \"25d3b43f-0bff-44ca-83f4-b8a0052cd764\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"
Feb 24 00:31:10 crc kubenswrapper[4824]: I0224 00:31:10.356084 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/25d3b43f-0bff-44ca-83f4-b8a0052cd764-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84\" (UID: \"25d3b43f-0bff-44ca-83f4-b8a0052cd764\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"
Feb 24 00:31:10 crc kubenswrapper[4824]: I0224 00:31:10.356248 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb569\" (UniqueName: \"kubernetes.io/projected/25d3b43f-0bff-44ca-83f4-b8a0052cd764-kube-api-access-gb569\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84\" (UID: \"25d3b43f-0bff-44ca-83f4-b8a0052cd764\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"
Feb 24 00:31:10 crc kubenswrapper[4824]: I0224 00:31:10.356277 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/25d3b43f-0bff-44ca-83f4-b8a0052cd764-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84\" (UID: \"25d3b43f-0bff-44ca-83f4-b8a0052cd764\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"
Feb 24 00:31:10 crc kubenswrapper[4824]: I0224 00:31:10.458157 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/25d3b43f-0bff-44ca-83f4-b8a0052cd764-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84\" (UID: \"25d3b43f-0bff-44ca-83f4-b8a0052cd764\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"
Feb 24 00:31:10 crc kubenswrapper[4824]: I0224 00:31:10.458265 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb569\" (UniqueName: \"kubernetes.io/projected/25d3b43f-0bff-44ca-83f4-b8a0052cd764-kube-api-access-gb569\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84\" (UID: \"25d3b43f-0bff-44ca-83f4-b8a0052cd764\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"
Feb 24 00:31:10 crc kubenswrapper[4824]: I0224 00:31:10.458323 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/25d3b43f-0bff-44ca-83f4-b8a0052cd764-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84\" (UID: \"25d3b43f-0bff-44ca-83f4-b8a0052cd764\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"
Feb 24 00:31:10 crc kubenswrapper[4824]: I0224 00:31:10.458388 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/25d3b43f-0bff-44ca-83f4-b8a0052cd764-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84\" (UID: \"25d3b43f-0bff-44ca-83f4-b8a0052cd764\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"
Feb 24 00:31:10 crc kubenswrapper[4824]: I0224 00:31:10.458430 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/25d3b43f-0bff-44ca-83f4-b8a0052cd764-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84\" (UID: \"25d3b43f-0bff-44ca-83f4-b8a0052cd764\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"
Feb 24 00:31:10 crc kubenswrapper[4824]: E0224 00:31:10.458937 4824 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found
Feb 24 00:31:10 crc kubenswrapper[4824]: I0224 00:31:10.458980 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/25d3b43f-0bff-44ca-83f4-b8a0052cd764-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84\" (UID: \"25d3b43f-0bff-44ca-83f4-b8a0052cd764\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"
Feb 24 00:31:10 crc kubenswrapper[4824]: E0224 00:31:10.459035 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25d3b43f-0bff-44ca-83f4-b8a0052cd764-default-cloud1-sens-meter-proxy-tls podName:25d3b43f-0bff-44ca-83f4-b8a0052cd764 nodeName:}" failed. No retries permitted until 2026-02-24 00:31:10.959005991 +0000 UTC m=+1534.948630470 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/25d3b43f-0bff-44ca-83f4-b8a0052cd764-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84" (UID: "25d3b43f-0bff-44ca-83f4-b8a0052cd764") : secret "default-cloud1-sens-meter-proxy-tls" not found
Feb 24 00:31:10 crc kubenswrapper[4824]: I0224 00:31:10.462106 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/25d3b43f-0bff-44ca-83f4-b8a0052cd764-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84\" (UID: \"25d3b43f-0bff-44ca-83f4-b8a0052cd764\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"
Feb 24 00:31:10 crc kubenswrapper[4824]: I0224 00:31:10.472074 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/25d3b43f-0bff-44ca-83f4-b8a0052cd764-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84\" (UID: \"25d3b43f-0bff-44ca-83f4-b8a0052cd764\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"
Feb 24 00:31:10 crc kubenswrapper[4824]: I0224 00:31:10.480685 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb569\" (UniqueName: \"kubernetes.io/projected/25d3b43f-0bff-44ca-83f4-b8a0052cd764-kube-api-access-gb569\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84\" (UID: \"25d3b43f-0bff-44ca-83f4-b8a0052cd764\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"
Feb 24 00:31:10 crc kubenswrapper[4824]: I0224 00:31:10.966098 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/25d3b43f-0bff-44ca-83f4-b8a0052cd764-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84\" (UID: \"25d3b43f-0bff-44ca-83f4-b8a0052cd764\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"
Feb 24 00:31:10 crc kubenswrapper[4824]: E0224 00:31:10.966328 4824 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found
Feb 24 00:31:10 crc kubenswrapper[4824]: E0224 00:31:10.966400 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25d3b43f-0bff-44ca-83f4-b8a0052cd764-default-cloud1-sens-meter-proxy-tls podName:25d3b43f-0bff-44ca-83f4-b8a0052cd764 nodeName:}" failed. No retries permitted until 2026-02-24 00:31:11.966379565 +0000 UTC m=+1535.956004034 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/25d3b43f-0bff-44ca-83f4-b8a0052cd764-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84" (UID: "25d3b43f-0bff-44ca-83f4-b8a0052cd764") : secret "default-cloud1-sens-meter-proxy-tls" not found
Feb 24 00:31:11 crc kubenswrapper[4824]: I0224 00:31:11.984351 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/25d3b43f-0bff-44ca-83f4-b8a0052cd764-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84\" (UID: \"25d3b43f-0bff-44ca-83f4-b8a0052cd764\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"
Feb 24 00:31:12 crc kubenswrapper[4824]: I0224 00:31:12.002794 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/25d3b43f-0bff-44ca-83f4-b8a0052cd764-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84\" (UID: \"25d3b43f-0bff-44ca-83f4-b8a0052cd764\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"
Feb 24 00:31:12 crc kubenswrapper[4824]: I0224 00:31:12.074137 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"
Feb 24 00:31:12 crc kubenswrapper[4824]: I0224 00:31:12.567924 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84"]
Feb 24 00:31:12 crc kubenswrapper[4824]: I0224 00:31:12.804601 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" event={"ID":"b1b6fe19-ad2f-490e-80dc-39ed80de85b3","Type":"ContainerStarted","Data":"9f0bb9f941494073f43473a4ed1758329780d03e5d150223a5f35671ee6f55e9"}
Feb 24 00:31:12 crc kubenswrapper[4824]: I0224 00:31:12.807165 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84" event={"ID":"25d3b43f-0bff-44ca-83f4-b8a0052cd764","Type":"ContainerStarted","Data":"1f54259e5730b34a2560d4dc934811944ebe1241cd69840a98777fa2d1414c52"}
Feb 24 00:31:12 crc kubenswrapper[4824]: I0224 00:31:12.809483 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"b4916ffb-2e83-480a-a12f-ad04c6144517","Type":"ContainerStarted","Data":"9bd11c389c536cfcd9cd6ffcc50ee95ec727476440d2570fe4ce41006cf9b49e"}
Feb 24 00:31:12 crc kubenswrapper[4824]: I0224 00:31:12.812132 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw" event={"ID":"99b264a5-5103-4445-8978-942c71208377","Type":"ContainerStarted","Data":"46e1822d839098966fca9d2fe9fd88f95716adc7230f55dbadcd96a250ec0b01"}
Feb 24 00:31:12 crc kubenswrapper[4824]: I0224 00:31:12.812165 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw" event={"ID":"99b264a5-5103-4445-8978-942c71208377","Type":"ContainerStarted","Data":"cc2f4bc7d66d0e52e1f46fb202f9d99c81667e99fccded51f463888fd413644e"}
Feb 24 00:31:12 crc kubenswrapper[4824]: I0224 00:31:12.825377 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" podStartSLOduration=4.073564784 podStartE2EDuration="10.825355807s" podCreationTimestamp="2026-02-24 00:31:02 +0000 UTC" firstStartedPulling="2026-02-24 00:31:05.319618814 +0000 UTC m=+1529.309243283" lastFinishedPulling="2026-02-24 00:31:12.071409837 +0000 UTC m=+1536.061034306" observedRunningTime="2026-02-24 00:31:12.823682835 +0000 UTC m=+1536.813307304" watchObservedRunningTime="2026-02-24 00:31:12.825355807 +0000 UTC m=+1536.814980276"
Feb 24 00:31:12 crc kubenswrapper[4824]: I0224 00:31:12.868269 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=16.483218921 podStartE2EDuration="26.868237292s" podCreationTimestamp="2026-02-24 00:30:46 +0000 UTC" firstStartedPulling="2026-02-24 00:31:01.684087498 +0000 UTC m=+1525.673711967" lastFinishedPulling="2026-02-24 00:31:12.069105869 +0000 UTC m=+1536.058730338" observedRunningTime="2026-02-24 00:31:12.867204417 +0000 UTC m=+1536.856828896" watchObservedRunningTime="2026-02-24 00:31:12.868237292 +0000 UTC m=+1536.857861771"
Feb 24 00:31:13 crc kubenswrapper[4824]: I0224 00:31:13.826676 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84" event={"ID":"25d3b43f-0bff-44ca-83f4-b8a0052cd764","Type":"ContainerStarted","Data":"0501ae22fa95c619ea2d8a7be3462d1ccb662e4fd126eb9acfa5460631d820d1"}
Feb 24 00:31:13 crc kubenswrapper[4824]: I0224 00:31:13.826756 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84" event={"ID":"25d3b43f-0bff-44ca-83f4-b8a0052cd764","Type":"ContainerStarted","Data":"a300490310e1c3ebb42c85c5a7fa61913de98e5750b6b6f122bc2e555fca562a"}
Feb 24 00:31:13 crc kubenswrapper[4824]: I0224 00:31:13.826774 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84" event={"ID":"25d3b43f-0bff-44ca-83f4-b8a0052cd764","Type":"ContainerStarted","Data":"57ab9185344c6438c10648348978fb546b25c744a42bd3170859d1b119ae7996"}
Feb 24 00:31:13 crc kubenswrapper[4824]: I0224 00:31:13.837968 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw" event={"ID":"99b264a5-5103-4445-8978-942c71208377","Type":"ContainerStarted","Data":"67b48a5c690377762558d40a05b5c8223d7b5f2dbb2757400994545f20b3efeb"}
Feb 24 00:31:13 crc kubenswrapper[4824]: I0224 00:31:13.848797 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84" podStartSLOduration=2.784867101 podStartE2EDuration="3.848772662s" podCreationTimestamp="2026-02-24 00:31:10 +0000 UTC" firstStartedPulling="2026-02-24 00:31:12.59282356 +0000 UTC m=+1536.582448029" lastFinishedPulling="2026-02-24 00:31:13.656729131 +0000 UTC m=+1537.646353590" observedRunningTime="2026-02-24 00:31:13.847304805 +0000 UTC m=+1537.836929284" watchObservedRunningTime="2026-02-24 00:31:13.848772662 +0000 UTC m=+1537.838397131"
Feb 24 00:31:18 crc kubenswrapper[4824]: I0224 00:31:18.279193 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw" podStartSLOduration=8.081729958 podStartE2EDuration="12.279165874s" podCreationTimestamp="2026-02-24 00:31:06 +0000 UTC" firstStartedPulling="2026-02-24 00:31:08.561412349 +0000 UTC m=+1532.551036818" lastFinishedPulling="2026-02-24 00:31:12.758848265 +0000 UTC m=+1536.748472734" observedRunningTime="2026-02-24 00:31:13.879370692 +0000 UTC m=+1537.868995181" watchObservedRunningTime="2026-02-24 00:31:18.279165874 +0000 UTC m=+1542.268790353"
Feb 24 00:31:18 crc kubenswrapper[4824]: I0224 00:31:18.286007 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt"]
Feb 24 00:31:18 crc kubenswrapper[4824]: I0224 00:31:18.287628 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt"
Feb 24 00:31:18 crc kubenswrapper[4824]: I0224 00:31:18.290856 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap"
Feb 24 00:31:18 crc kubenswrapper[4824]: I0224 00:31:18.305023 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert"
Feb 24 00:31:18 crc kubenswrapper[4824]: I0224 00:31:18.306894 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt"]
Feb 24 00:31:18 crc kubenswrapper[4824]: I0224 00:31:18.408001 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/77c39bbc-adcc-40f9-afe2-9d97f93262b9-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt\" (UID: \"77c39bbc-adcc-40f9-afe2-9d97f93262b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt"
Feb 24 00:31:18 crc kubenswrapper[4824]: I0224 00:31:18.408399 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqxlg\" (UniqueName: \"kubernetes.io/projected/77c39bbc-adcc-40f9-afe2-9d97f93262b9-kube-api-access-fqxlg\") pod \"default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt\" (UID: \"77c39bbc-adcc-40f9-afe2-9d97f93262b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt"
Feb 24 00:31:18 crc kubenswrapper[4824]: I0224 00:31:18.408620 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/77c39bbc-adcc-40f9-afe2-9d97f93262b9-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt\" (UID: \"77c39bbc-adcc-40f9-afe2-9d97f93262b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt"
Feb 24 00:31:18 crc kubenswrapper[4824]: I0224 00:31:18.408936 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/77c39bbc-adcc-40f9-afe2-9d97f93262b9-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt\" (UID: \"77c39bbc-adcc-40f9-afe2-9d97f93262b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt"
Feb 24 00:31:18 crc kubenswrapper[4824]: I0224 00:31:18.510642 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/77c39bbc-adcc-40f9-afe2-9d97f93262b9-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt\" (UID: \"77c39bbc-adcc-40f9-afe2-9d97f93262b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt"
Feb 24 00:31:18 crc kubenswrapper[4824]: I0224 00:31:18.510713 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqxlg\" (UniqueName: \"kubernetes.io/projected/77c39bbc-adcc-40f9-afe2-9d97f93262b9-kube-api-access-fqxlg\") pod \"default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt\" (UID: \"77c39bbc-adcc-40f9-afe2-9d97f93262b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt"
Feb 24 00:31:18 crc kubenswrapper[4824]: I0224 00:31:18.510762 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/77c39bbc-adcc-40f9-afe2-9d97f93262b9-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt\" (UID: \"77c39bbc-adcc-40f9-afe2-9d97f93262b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt"
Feb 24 00:31:18 crc kubenswrapper[4824]: I0224 00:31:18.510834 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/77c39bbc-adcc-40f9-afe2-9d97f93262b9-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt\" (UID: \"77c39bbc-adcc-40f9-afe2-9d97f93262b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt"
Feb 24 00:31:18 crc kubenswrapper[4824]: I0224 00:31:18.511761 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/77c39bbc-adcc-40f9-afe2-9d97f93262b9-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt\" (UID: \"77c39bbc-adcc-40f9-afe2-9d97f93262b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt"
Feb 24 00:31:18 crc kubenswrapper[4824]: I0224 00:31:18.512257 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/77c39bbc-adcc-40f9-afe2-9d97f93262b9-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt\" (UID: \"77c39bbc-adcc-40f9-afe2-9d97f93262b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt"
Feb 24 00:31:18 crc kubenswrapper[4824]: I0224 00:31:18.524690 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/77c39bbc-adcc-40f9-afe2-9d97f93262b9-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt\" (UID: \"77c39bbc-adcc-40f9-afe2-9d97f93262b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt"
Feb 24 00:31:18 crc kubenswrapper[4824]: I0224 00:31:18.530161 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqxlg\" (UniqueName: \"kubernetes.io/projected/77c39bbc-adcc-40f9-afe2-9d97f93262b9-kube-api-access-fqxlg\") pod \"default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt\" (UID: \"77c39bbc-adcc-40f9-afe2-9d97f93262b9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt"
Feb 24 00:31:18 crc kubenswrapper[4824]: I0224 00:31:18.623840 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt"
Feb 24 00:31:19 crc kubenswrapper[4824]: I0224 00:31:19.105535 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt"]
Feb 24 00:31:19 crc kubenswrapper[4824]: W0224 00:31:19.108839 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77c39bbc_adcc_40f9_afe2_9d97f93262b9.slice/crio-e34c44e7de1ec67024a6591049eeba99e994bf04ba4730c05dd2dfad7d208d96 WatchSource:0}: Error finding container e34c44e7de1ec67024a6591049eeba99e994bf04ba4730c05dd2dfad7d208d96: Status 404 returned error can't find the container with id e34c44e7de1ec67024a6591049eeba99e994bf04ba4730c05dd2dfad7d208d96
Feb 24 00:31:19 crc kubenswrapper[4824]: I0224 00:31:19.799835 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0"
Feb 24 00:31:19 crc kubenswrapper[4824]: I0224 00:31:19.848483 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0"
Feb 24 00:31:19 crc kubenswrapper[4824]: I0224 00:31:19.860613 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc"]
Feb 24 00:31:19 crc kubenswrapper[4824]: I0224 00:31:19.869778 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc"
Feb 24 00:31:19 crc kubenswrapper[4824]: I0224 00:31:19.883465 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc"]
Feb 24 00:31:19 crc kubenswrapper[4824]: I0224 00:31:19.884714 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap"
Feb 24 00:31:19 crc kubenswrapper[4824]: I0224 00:31:19.925330 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt" event={"ID":"77c39bbc-adcc-40f9-afe2-9d97f93262b9","Type":"ContainerStarted","Data":"e34c44e7de1ec67024a6591049eeba99e994bf04ba4730c05dd2dfad7d208d96"}
Feb 24 00:31:19 crc kubenswrapper[4824]: I0224 00:31:19.937819 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/19cb1d3e-5363-406a-a5f4-ecfe04edd347-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc\" (UID: \"19cb1d3e-5363-406a-a5f4-ecfe04edd347\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc"
Feb 24 00:31:19 crc kubenswrapper[4824]: I0224 00:31:19.937969 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/19cb1d3e-5363-406a-a5f4-ecfe04edd347-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc\" (UID: \"19cb1d3e-5363-406a-a5f4-ecfe04edd347\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc"
Feb 24 00:31:19 crc kubenswrapper[4824]: I0224 00:31:19.938121 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxwk9\" (UniqueName: \"kubernetes.io/projected/19cb1d3e-5363-406a-a5f4-ecfe04edd347-kube-api-access-gxwk9\") pod \"default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc\" (UID: \"19cb1d3e-5363-406a-a5f4-ecfe04edd347\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc"
Feb 24 00:31:19 crc kubenswrapper[4824]: I0224 00:31:19.938175 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/19cb1d3e-5363-406a-a5f4-ecfe04edd347-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc\" (UID: \"19cb1d3e-5363-406a-a5f4-ecfe04edd347\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc"
Feb 24 00:31:19 crc kubenswrapper[4824]: I0224 00:31:19.974997 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0"
Feb 24 00:31:20 crc kubenswrapper[4824]: I0224 00:31:20.039820 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/19cb1d3e-5363-406a-a5f4-ecfe04edd347-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc\" (UID: \"19cb1d3e-5363-406a-a5f4-ecfe04edd347\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc"
Feb 24 00:31:20 crc kubenswrapper[4824]: I0224 00:31:20.041005 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxwk9\" (UniqueName: \"kubernetes.io/projected/19cb1d3e-5363-406a-a5f4-ecfe04edd347-kube-api-access-gxwk9\") pod \"default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc\"
(UID: \"19cb1d3e-5363-406a-a5f4-ecfe04edd347\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc" Feb 24 00:31:20 crc kubenswrapper[4824]: I0224 00:31:20.041087 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/19cb1d3e-5363-406a-a5f4-ecfe04edd347-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc\" (UID: \"19cb1d3e-5363-406a-a5f4-ecfe04edd347\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc" Feb 24 00:31:20 crc kubenswrapper[4824]: I0224 00:31:20.041237 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/19cb1d3e-5363-406a-a5f4-ecfe04edd347-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc\" (UID: \"19cb1d3e-5363-406a-a5f4-ecfe04edd347\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc" Feb 24 00:31:20 crc kubenswrapper[4824]: I0224 00:31:20.042072 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/19cb1d3e-5363-406a-a5f4-ecfe04edd347-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc\" (UID: \"19cb1d3e-5363-406a-a5f4-ecfe04edd347\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc" Feb 24 00:31:20 crc kubenswrapper[4824]: I0224 00:31:20.042241 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/19cb1d3e-5363-406a-a5f4-ecfe04edd347-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc\" (UID: \"19cb1d3e-5363-406a-a5f4-ecfe04edd347\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc" Feb 24 00:31:20 crc kubenswrapper[4824]: I0224 00:31:20.059992 4824 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/19cb1d3e-5363-406a-a5f4-ecfe04edd347-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc\" (UID: \"19cb1d3e-5363-406a-a5f4-ecfe04edd347\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc" Feb 24 00:31:20 crc kubenswrapper[4824]: I0224 00:31:20.070991 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxwk9\" (UniqueName: \"kubernetes.io/projected/19cb1d3e-5363-406a-a5f4-ecfe04edd347-kube-api-access-gxwk9\") pod \"default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc\" (UID: \"19cb1d3e-5363-406a-a5f4-ecfe04edd347\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc" Feb 24 00:31:20 crc kubenswrapper[4824]: I0224 00:31:20.257389 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc" Feb 24 00:31:20 crc kubenswrapper[4824]: I0224 00:31:20.615123 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc"] Feb 24 00:31:20 crc kubenswrapper[4824]: W0224 00:31:20.630390 4824 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19cb1d3e_5363_406a_a5f4_ecfe04edd347.slice/crio-9ff67cab7f3120f0523b4e0ad4234822c8511472e1112e376dcc9c742e509cd6 WatchSource:0}: Error finding container 9ff67cab7f3120f0523b4e0ad4234822c8511472e1112e376dcc9c742e509cd6: Status 404 returned error can't find the container with id 9ff67cab7f3120f0523b4e0ad4234822c8511472e1112e376dcc9c742e509cd6 Feb 24 00:31:20 crc kubenswrapper[4824]: I0224 00:31:20.933010 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc" 
event={"ID":"19cb1d3e-5363-406a-a5f4-ecfe04edd347","Type":"ContainerStarted","Data":"5d52767b4ff2eefa3a301cf99e13e98ce67da689c9800b168e08d2bdb25f9f50"} Feb 24 00:31:20 crc kubenswrapper[4824]: I0224 00:31:20.933528 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc" event={"ID":"19cb1d3e-5363-406a-a5f4-ecfe04edd347","Type":"ContainerStarted","Data":"9ff67cab7f3120f0523b4e0ad4234822c8511472e1112e376dcc9c742e509cd6"} Feb 24 00:31:20 crc kubenswrapper[4824]: I0224 00:31:20.936571 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt" event={"ID":"77c39bbc-adcc-40f9-afe2-9d97f93262b9","Type":"ContainerStarted","Data":"be8a0df6436c28c142600bd78cd1d720ab2f7272de54afd3e686be824d339778"} Feb 24 00:31:20 crc kubenswrapper[4824]: I0224 00:31:20.936661 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt" event={"ID":"77c39bbc-adcc-40f9-afe2-9d97f93262b9","Type":"ContainerStarted","Data":"c4a8fb8018686ea387d812e3112b01226a43b249311b6283fbdc21bb7472950f"} Feb 24 00:31:20 crc kubenswrapper[4824]: I0224 00:31:20.967015 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt" podStartSLOduration=1.883486349 podStartE2EDuration="2.966975196s" podCreationTimestamp="2026-02-24 00:31:18 +0000 UTC" firstStartedPulling="2026-02-24 00:31:19.112056395 +0000 UTC m=+1543.101680864" lastFinishedPulling="2026-02-24 00:31:20.195545242 +0000 UTC m=+1544.185169711" observedRunningTime="2026-02-24 00:31:20.961178032 +0000 UTC m=+1544.950802521" watchObservedRunningTime="2026-02-24 00:31:20.966975196 +0000 UTC m=+1544.956599675" Feb 24 00:31:21 crc kubenswrapper[4824]: I0224 00:31:21.950111 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc" event={"ID":"19cb1d3e-5363-406a-a5f4-ecfe04edd347","Type":"ContainerStarted","Data":"80ace817a383f22df03d441b3f0d005d9ec376e63a99860d577018e17a1d784e"} Feb 24 00:31:21 crc kubenswrapper[4824]: I0224 00:31:21.978550 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc" podStartSLOduration=2.5804589570000003 podStartE2EDuration="2.978508656s" podCreationTimestamp="2026-02-24 00:31:19 +0000 UTC" firstStartedPulling="2026-02-24 00:31:20.633900362 +0000 UTC m=+1544.623524831" lastFinishedPulling="2026-02-24 00:31:21.031950071 +0000 UTC m=+1545.021574530" observedRunningTime="2026-02-24 00:31:21.976567578 +0000 UTC m=+1545.966192057" watchObservedRunningTime="2026-02-24 00:31:21.978508656 +0000 UTC m=+1545.968133125" Feb 24 00:31:23 crc kubenswrapper[4824]: I0224 00:31:23.276087 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:31:23 crc kubenswrapper[4824]: I0224 00:31:23.277363 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:31:33 crc kubenswrapper[4824]: I0224 00:31:33.463207 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-c5wht"] Feb 24 00:31:33 crc kubenswrapper[4824]: I0224 00:31:33.464469 4824 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="service-telemetry/default-interconnect-68864d46cb-c5wht" podUID="4ffe4e04-44ea-455a-8788-47d60605ed27" containerName="default-interconnect" containerID="cri-o://29bbfa19ec9b4d4d9603b53deb3db8ea412ca063873b7a0aadd1b1ae2c93f293" gracePeriod=30 Feb 24 00:31:33 crc kubenswrapper[4824]: I0224 00:31:33.888079 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.050107 4824 generic.go:334] "Generic (PLEG): container finished" podID="19cb1d3e-5363-406a-a5f4-ecfe04edd347" containerID="5d52767b4ff2eefa3a301cf99e13e98ce67da689c9800b168e08d2bdb25f9f50" exitCode=0 Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.050201 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc" event={"ID":"19cb1d3e-5363-406a-a5f4-ecfe04edd347","Type":"ContainerDied","Data":"5d52767b4ff2eefa3a301cf99e13e98ce67da689c9800b168e08d2bdb25f9f50"} Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.051127 4824 scope.go:117] "RemoveContainer" containerID="5d52767b4ff2eefa3a301cf99e13e98ce67da689c9800b168e08d2bdb25f9f50" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.053071 4824 generic.go:334] "Generic (PLEG): container finished" podID="b1b6fe19-ad2f-490e-80dc-39ed80de85b3" containerID="4cf792676db082f4deec5a1cc01c8abbce90a283e92e03176c0a5e3f90a5aa8c" exitCode=0 Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.053118 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" event={"ID":"b1b6fe19-ad2f-490e-80dc-39ed80de85b3","Type":"ContainerDied","Data":"4cf792676db082f4deec5a1cc01c8abbce90a283e92e03176c0a5e3f90a5aa8c"} Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.054036 4824 scope.go:117] "RemoveContainer" 
containerID="4cf792676db082f4deec5a1cc01c8abbce90a283e92e03176c0a5e3f90a5aa8c" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.054929 4824 generic.go:334] "Generic (PLEG): container finished" podID="4ffe4e04-44ea-455a-8788-47d60605ed27" containerID="29bbfa19ec9b4d4d9603b53deb3db8ea412ca063873b7a0aadd1b1ae2c93f293" exitCode=0 Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.055052 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-c5wht" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.055191 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-c5wht" event={"ID":"4ffe4e04-44ea-455a-8788-47d60605ed27","Type":"ContainerDied","Data":"29bbfa19ec9b4d4d9603b53deb3db8ea412ca063873b7a0aadd1b1ae2c93f293"} Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.055312 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-c5wht" event={"ID":"4ffe4e04-44ea-455a-8788-47d60605ed27","Type":"ContainerDied","Data":"53601482595b879499df40be3a545ef10a2727ef3879843a1a926827995e2ff3"} Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.055333 4824 scope.go:117] "RemoveContainer" containerID="29bbfa19ec9b4d4d9603b53deb3db8ea412ca063873b7a0aadd1b1ae2c93f293" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.063641 4824 generic.go:334] "Generic (PLEG): container finished" podID="25d3b43f-0bff-44ca-83f4-b8a0052cd764" containerID="a300490310e1c3ebb42c85c5a7fa61913de98e5750b6b6f122bc2e555fca562a" exitCode=0 Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.063723 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84" event={"ID":"25d3b43f-0bff-44ca-83f4-b8a0052cd764","Type":"ContainerDied","Data":"a300490310e1c3ebb42c85c5a7fa61913de98e5750b6b6f122bc2e555fca562a"} Feb 24 00:31:34 
crc kubenswrapper[4824]: I0224 00:31:34.064577 4824 scope.go:117] "RemoveContainer" containerID="a300490310e1c3ebb42c85c5a7fa61913de98e5750b6b6f122bc2e555fca562a" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.077943 4824 generic.go:334] "Generic (PLEG): container finished" podID="99b264a5-5103-4445-8978-942c71208377" containerID="46e1822d839098966fca9d2fe9fd88f95716adc7230f55dbadcd96a250ec0b01" exitCode=0 Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.078010 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw" event={"ID":"99b264a5-5103-4445-8978-942c71208377","Type":"ContainerDied","Data":"46e1822d839098966fca9d2fe9fd88f95716adc7230f55dbadcd96a250ec0b01"} Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.078840 4824 scope.go:117] "RemoveContainer" containerID="46e1822d839098966fca9d2fe9fd88f95716adc7230f55dbadcd96a250ec0b01" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.084788 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6lw6\" (UniqueName: \"kubernetes.io/projected/4ffe4e04-44ea-455a-8788-47d60605ed27-kube-api-access-s6lw6\") pod \"4ffe4e04-44ea-455a-8788-47d60605ed27\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.084864 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-sasl-users\") pod \"4ffe4e04-44ea-455a-8788-47d60605ed27\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.084925 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-inter-router-credentials\") pod 
\"4ffe4e04-44ea-455a-8788-47d60605ed27\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.084983 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-inter-router-ca\") pod \"4ffe4e04-44ea-455a-8788-47d60605ed27\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.085034 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-openstack-ca\") pod \"4ffe4e04-44ea-455a-8788-47d60605ed27\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.085084 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/4ffe4e04-44ea-455a-8788-47d60605ed27-sasl-config\") pod \"4ffe4e04-44ea-455a-8788-47d60605ed27\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.085133 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-openstack-credentials\") pod \"4ffe4e04-44ea-455a-8788-47d60605ed27\" (UID: \"4ffe4e04-44ea-455a-8788-47d60605ed27\") " Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.089893 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ffe4e04-44ea-455a-8788-47d60605ed27-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "4ffe4e04-44ea-455a-8788-47d60605ed27" (UID: "4ffe4e04-44ea-455a-8788-47d60605ed27"). 
InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.090678 4824 scope.go:117] "RemoveContainer" containerID="29bbfa19ec9b4d4d9603b53deb3db8ea412ca063873b7a0aadd1b1ae2c93f293" Feb 24 00:31:34 crc kubenswrapper[4824]: E0224 00:31:34.092678 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29bbfa19ec9b4d4d9603b53deb3db8ea412ca063873b7a0aadd1b1ae2c93f293\": container with ID starting with 29bbfa19ec9b4d4d9603b53deb3db8ea412ca063873b7a0aadd1b1ae2c93f293 not found: ID does not exist" containerID="29bbfa19ec9b4d4d9603b53deb3db8ea412ca063873b7a0aadd1b1ae2c93f293" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.092803 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29bbfa19ec9b4d4d9603b53deb3db8ea412ca063873b7a0aadd1b1ae2c93f293"} err="failed to get container status \"29bbfa19ec9b4d4d9603b53deb3db8ea412ca063873b7a0aadd1b1ae2c93f293\": rpc error: code = NotFound desc = could not find container \"29bbfa19ec9b4d4d9603b53deb3db8ea412ca063873b7a0aadd1b1ae2c93f293\": container with ID starting with 29bbfa19ec9b4d4d9603b53deb3db8ea412ca063873b7a0aadd1b1ae2c93f293 not found: ID does not exist" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.093692 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "4ffe4e04-44ea-455a-8788-47d60605ed27" (UID: "4ffe4e04-44ea-455a-8788-47d60605ed27"). InnerVolumeSpecName "default-interconnect-openstack-credentials". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.103269 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "4ffe4e04-44ea-455a-8788-47d60605ed27" (UID: "4ffe4e04-44ea-455a-8788-47d60605ed27"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.103421 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "4ffe4e04-44ea-455a-8788-47d60605ed27" (UID: "4ffe4e04-44ea-455a-8788-47d60605ed27"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.104838 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ffe4e04-44ea-455a-8788-47d60605ed27-kube-api-access-s6lw6" (OuterVolumeSpecName: "kube-api-access-s6lw6") pod "4ffe4e04-44ea-455a-8788-47d60605ed27" (UID: "4ffe4e04-44ea-455a-8788-47d60605ed27"). InnerVolumeSpecName "kube-api-access-s6lw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.109286 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "4ffe4e04-44ea-455a-8788-47d60605ed27" (UID: "4ffe4e04-44ea-455a-8788-47d60605ed27"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.122158 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "4ffe4e04-44ea-455a-8788-47d60605ed27" (UID: "4ffe4e04-44ea-455a-8788-47d60605ed27"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.187258 4824 reconciler_common.go:293] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/4ffe4e04-44ea-455a-8788-47d60605ed27-sasl-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.187297 4824 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.187309 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6lw6\" (UniqueName: \"kubernetes.io/projected/4ffe4e04-44ea-455a-8788-47d60605ed27-kube-api-access-s6lw6\") on node \"crc\" DevicePath \"\"" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.187339 4824 reconciler_common.go:293] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-sasl-users\") on node \"crc\" DevicePath \"\"" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.187349 4824 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.187362 4824 
reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.187373 4824 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/4ffe4e04-44ea-455a-8788-47d60605ed27-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.405953 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-c5wht"] Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.406533 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-c5wht"] Feb 24 00:31:34 crc kubenswrapper[4824]: I0224 00:31:34.708295 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ffe4e04-44ea-455a-8788-47d60605ed27" path="/var/lib/kubelet/pods/4ffe4e04-44ea-455a-8788-47d60605ed27/volumes" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.094759 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84" event={"ID":"25d3b43f-0bff-44ca-83f4-b8a0052cd764","Type":"ContainerStarted","Data":"580e4473703a9287d77dfa0282262b828fdf928feb45462d2929a259ad5db38e"} Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.115840 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-76s9x"] Feb 24 00:31:35 crc kubenswrapper[4824]: E0224 00:31:35.116485 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ffe4e04-44ea-455a-8788-47d60605ed27" containerName="default-interconnect" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.116509 4824 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4ffe4e04-44ea-455a-8788-47d60605ed27" containerName="default-interconnect" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.117058 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ffe4e04-44ea-455a-8788-47d60605ed27" containerName="default-interconnect" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.118425 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw" event={"ID":"99b264a5-5103-4445-8978-942c71208377","Type":"ContainerStarted","Data":"895f0c841b08d7b22272f1ad5924dc1a69f338d44d989cee670afa4ce10c5b44"} Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.120907 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.125226 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.125483 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.125687 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-76s9x"] Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.125719 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.126775 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc" event={"ID":"19cb1d3e-5363-406a-a5f4-ecfe04edd347","Type":"ContainerStarted","Data":"eeddf94c39afeaf91d3aad6e7e7b037e372c6ac5b7c06815ce6a33281b134212"} Feb 24 00:31:35 crc 
kubenswrapper[4824]: I0224 00:31:35.128434 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-d6d6z" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.130136 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.130177 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.130477 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.163080 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" event={"ID":"b1b6fe19-ad2f-490e-80dc-39ed80de85b3","Type":"ContainerStarted","Data":"9daa99339d0f6cbad236603de0bc2f34e81e039ff7ee664ab10491e20bc297e0"} Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.201297 4824 generic.go:334] "Generic (PLEG): container finished" podID="77c39bbc-adcc-40f9-afe2-9d97f93262b9" containerID="c4a8fb8018686ea387d812e3112b01226a43b249311b6283fbdc21bb7472950f" exitCode=0 Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.201349 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt" event={"ID":"77c39bbc-adcc-40f9-afe2-9d97f93262b9","Type":"ContainerDied","Data":"c4a8fb8018686ea387d812e3112b01226a43b249311b6283fbdc21bb7472950f"} Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.202025 4824 scope.go:117] "RemoveContainer" containerID="c4a8fb8018686ea387d812e3112b01226a43b249311b6283fbdc21bb7472950f" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.225435 4824 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/58055cab-656a-46f2-a3e6-ab76d8943362-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-76s9x\" (UID: \"58055cab-656a-46f2-a3e6-ab76d8943362\") " pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.225534 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/58055cab-656a-46f2-a3e6-ab76d8943362-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-76s9x\" (UID: \"58055cab-656a-46f2-a3e6-ab76d8943362\") " pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.225576 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/58055cab-656a-46f2-a3e6-ab76d8943362-sasl-users\") pod \"default-interconnect-68864d46cb-76s9x\" (UID: \"58055cab-656a-46f2-a3e6-ab76d8943362\") " pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.225680 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/58055cab-656a-46f2-a3e6-ab76d8943362-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-76s9x\" (UID: \"58055cab-656a-46f2-a3e6-ab76d8943362\") " pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.225755 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: 
\"kubernetes.io/configmap/58055cab-656a-46f2-a3e6-ab76d8943362-sasl-config\") pod \"default-interconnect-68864d46cb-76s9x\" (UID: \"58055cab-656a-46f2-a3e6-ab76d8943362\") " pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.225888 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fxr9\" (UniqueName: \"kubernetes.io/projected/58055cab-656a-46f2-a3e6-ab76d8943362-kube-api-access-7fxr9\") pod \"default-interconnect-68864d46cb-76s9x\" (UID: \"58055cab-656a-46f2-a3e6-ab76d8943362\") " pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.225924 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/58055cab-656a-46f2-a3e6-ab76d8943362-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-76s9x\" (UID: \"58055cab-656a-46f2-a3e6-ab76d8943362\") " pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.327613 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/58055cab-656a-46f2-a3e6-ab76d8943362-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-76s9x\" (UID: \"58055cab-656a-46f2-a3e6-ab76d8943362\") " pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.327738 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/58055cab-656a-46f2-a3e6-ab76d8943362-sasl-config\") pod \"default-interconnect-68864d46cb-76s9x\" (UID: \"58055cab-656a-46f2-a3e6-ab76d8943362\") " 
pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.327804 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fxr9\" (UniqueName: \"kubernetes.io/projected/58055cab-656a-46f2-a3e6-ab76d8943362-kube-api-access-7fxr9\") pod \"default-interconnect-68864d46cb-76s9x\" (UID: \"58055cab-656a-46f2-a3e6-ab76d8943362\") " pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.328244 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/58055cab-656a-46f2-a3e6-ab76d8943362-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-76s9x\" (UID: \"58055cab-656a-46f2-a3e6-ab76d8943362\") " pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.328687 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/58055cab-656a-46f2-a3e6-ab76d8943362-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-76s9x\" (UID: \"58055cab-656a-46f2-a3e6-ab76d8943362\") " pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.329016 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/58055cab-656a-46f2-a3e6-ab76d8943362-sasl-config\") pod \"default-interconnect-68864d46cb-76s9x\" (UID: \"58055cab-656a-46f2-a3e6-ab76d8943362\") " pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.329491 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" 
(UniqueName: \"kubernetes.io/secret/58055cab-656a-46f2-a3e6-ab76d8943362-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-76s9x\" (UID: \"58055cab-656a-46f2-a3e6-ab76d8943362\") " pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.329563 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/58055cab-656a-46f2-a3e6-ab76d8943362-sasl-users\") pod \"default-interconnect-68864d46cb-76s9x\" (UID: \"58055cab-656a-46f2-a3e6-ab76d8943362\") " pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.337846 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/58055cab-656a-46f2-a3e6-ab76d8943362-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-76s9x\" (UID: \"58055cab-656a-46f2-a3e6-ab76d8943362\") " pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.350420 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/58055cab-656a-46f2-a3e6-ab76d8943362-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-76s9x\" (UID: \"58055cab-656a-46f2-a3e6-ab76d8943362\") " pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.350452 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fxr9\" (UniqueName: \"kubernetes.io/projected/58055cab-656a-46f2-a3e6-ab76d8943362-kube-api-access-7fxr9\") pod \"default-interconnect-68864d46cb-76s9x\" (UID: \"58055cab-656a-46f2-a3e6-ab76d8943362\") " 
pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.351460 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/58055cab-656a-46f2-a3e6-ab76d8943362-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-76s9x\" (UID: \"58055cab-656a-46f2-a3e6-ab76d8943362\") " pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.357093 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/58055cab-656a-46f2-a3e6-ab76d8943362-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-76s9x\" (UID: \"58055cab-656a-46f2-a3e6-ab76d8943362\") " pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.358069 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/58055cab-656a-46f2-a3e6-ab76d8943362-sasl-users\") pod \"default-interconnect-68864d46cb-76s9x\" (UID: \"58055cab-656a-46f2-a3e6-ab76d8943362\") " pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.455177 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-76s9x" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.741780 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-76s9x"] Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.778359 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.780223 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.792570 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.801247 4824 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.801458 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.940205 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/ac546686-8945-46b8-8577-da344c7517bd-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"ac546686-8945-46b8-8577-da344c7517bd\") " pod="service-telemetry/qdr-test" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.940271 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgsvq\" (UniqueName: \"kubernetes.io/projected/ac546686-8945-46b8-8577-da344c7517bd-kube-api-access-bgsvq\") pod \"qdr-test\" (UID: \"ac546686-8945-46b8-8577-da344c7517bd\") " pod="service-telemetry/qdr-test" Feb 24 00:31:35 crc kubenswrapper[4824]: I0224 00:31:35.940318 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/ac546686-8945-46b8-8577-da344c7517bd-qdr-test-config\") pod \"qdr-test\" (UID: \"ac546686-8945-46b8-8577-da344c7517bd\") " pod="service-telemetry/qdr-test" Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.041832 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: 
\"kubernetes.io/secret/ac546686-8945-46b8-8577-da344c7517bd-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"ac546686-8945-46b8-8577-da344c7517bd\") " pod="service-telemetry/qdr-test" Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.041905 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgsvq\" (UniqueName: \"kubernetes.io/projected/ac546686-8945-46b8-8577-da344c7517bd-kube-api-access-bgsvq\") pod \"qdr-test\" (UID: \"ac546686-8945-46b8-8577-da344c7517bd\") " pod="service-telemetry/qdr-test" Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.041960 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/ac546686-8945-46b8-8577-da344c7517bd-qdr-test-config\") pod \"qdr-test\" (UID: \"ac546686-8945-46b8-8577-da344c7517bd\") " pod="service-telemetry/qdr-test" Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.042909 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/ac546686-8945-46b8-8577-da344c7517bd-qdr-test-config\") pod \"qdr-test\" (UID: \"ac546686-8945-46b8-8577-da344c7517bd\") " pod="service-telemetry/qdr-test" Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.049089 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/ac546686-8945-46b8-8577-da344c7517bd-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"ac546686-8945-46b8-8577-da344c7517bd\") " pod="service-telemetry/qdr-test" Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.063159 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgsvq\" (UniqueName: \"kubernetes.io/projected/ac546686-8945-46b8-8577-da344c7517bd-kube-api-access-bgsvq\") pod \"qdr-test\" (UID: \"ac546686-8945-46b8-8577-da344c7517bd\") 
" pod="service-telemetry/qdr-test" Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.114454 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.214224 4824 generic.go:334] "Generic (PLEG): container finished" podID="25d3b43f-0bff-44ca-83f4-b8a0052cd764" containerID="580e4473703a9287d77dfa0282262b828fdf928feb45462d2929a259ad5db38e" exitCode=0 Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.214319 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84" event={"ID":"25d3b43f-0bff-44ca-83f4-b8a0052cd764","Type":"ContainerDied","Data":"580e4473703a9287d77dfa0282262b828fdf928feb45462d2929a259ad5db38e"} Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.214375 4824 scope.go:117] "RemoveContainer" containerID="a300490310e1c3ebb42c85c5a7fa61913de98e5750b6b6f122bc2e555fca562a" Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.215255 4824 scope.go:117] "RemoveContainer" containerID="580e4473703a9287d77dfa0282262b828fdf928feb45462d2929a259ad5db38e" Feb 24 00:31:36 crc kubenswrapper[4824]: E0224 00:31:36.215562 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84_service-telemetry(25d3b43f-0bff-44ca-83f4-b8a0052cd764)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84" podUID="25d3b43f-0bff-44ca-83f4-b8a0052cd764" Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.218066 4824 generic.go:334] "Generic (PLEG): container finished" podID="b1b6fe19-ad2f-490e-80dc-39ed80de85b3" containerID="9daa99339d0f6cbad236603de0bc2f34e81e039ff7ee664ab10491e20bc297e0" exitCode=0 Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.218154 4824 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" event={"ID":"b1b6fe19-ad2f-490e-80dc-39ed80de85b3","Type":"ContainerDied","Data":"9daa99339d0f6cbad236603de0bc2f34e81e039ff7ee664ab10491e20bc297e0"} Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.219106 4824 scope.go:117] "RemoveContainer" containerID="9daa99339d0f6cbad236603de0bc2f34e81e039ff7ee664ab10491e20bc297e0" Feb 24 00:31:36 crc kubenswrapper[4824]: E0224 00:31:36.219368 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw_service-telemetry(b1b6fe19-ad2f-490e-80dc-39ed80de85b3)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" podUID="b1b6fe19-ad2f-490e-80dc-39ed80de85b3" Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.222095 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-76s9x" event={"ID":"58055cab-656a-46f2-a3e6-ab76d8943362","Type":"ContainerStarted","Data":"9d91402f05c5c2c741c970ba847c0d642086a04954d84feb7a08ab20acbd6174"} Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.227452 4824 generic.go:334] "Generic (PLEG): container finished" podID="99b264a5-5103-4445-8978-942c71208377" containerID="895f0c841b08d7b22272f1ad5924dc1a69f338d44d989cee670afa4ce10c5b44" exitCode=0 Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.227582 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw" event={"ID":"99b264a5-5103-4445-8978-942c71208377","Type":"ContainerDied","Data":"895f0c841b08d7b22272f1ad5924dc1a69f338d44d989cee670afa4ce10c5b44"} Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.230501 4824 scope.go:117] "RemoveContainer" 
containerID="895f0c841b08d7b22272f1ad5924dc1a69f338d44d989cee670afa4ce10c5b44" Feb 24 00:31:36 crc kubenswrapper[4824]: E0224 00:31:36.230909 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw_service-telemetry(99b264a5-5103-4445-8978-942c71208377)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw" podUID="99b264a5-5103-4445-8978-942c71208377" Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.261867 4824 generic.go:334] "Generic (PLEG): container finished" podID="19cb1d3e-5363-406a-a5f4-ecfe04edd347" containerID="eeddf94c39afeaf91d3aad6e7e7b037e372c6ac5b7c06815ce6a33281b134212" exitCode=0 Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.261955 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc" event={"ID":"19cb1d3e-5363-406a-a5f4-ecfe04edd347","Type":"ContainerDied","Data":"eeddf94c39afeaf91d3aad6e7e7b037e372c6ac5b7c06815ce6a33281b134212"} Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.263192 4824 scope.go:117] "RemoveContainer" containerID="eeddf94c39afeaf91d3aad6e7e7b037e372c6ac5b7c06815ce6a33281b134212" Feb 24 00:31:36 crc kubenswrapper[4824]: E0224 00:31:36.263599 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc_service-telemetry(19cb1d3e-5363-406a-a5f4-ecfe04edd347)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc" podUID="19cb1d3e-5363-406a-a5f4-ecfe04edd347" Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.713285 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 24 
00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.716228 4824 scope.go:117] "RemoveContainer" containerID="4cf792676db082f4deec5a1cc01c8abbce90a283e92e03176c0a5e3f90a5aa8c" Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.764333 4824 scope.go:117] "RemoveContainer" containerID="46e1822d839098966fca9d2fe9fd88f95716adc7230f55dbadcd96a250ec0b01" Feb 24 00:31:36 crc kubenswrapper[4824]: I0224 00:31:36.857256 4824 scope.go:117] "RemoveContainer" containerID="5d52767b4ff2eefa3a301cf99e13e98ce67da689c9800b168e08d2bdb25f9f50" Feb 24 00:31:37 crc kubenswrapper[4824]: I0224 00:31:37.275111 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-76s9x" event={"ID":"58055cab-656a-46f2-a3e6-ab76d8943362","Type":"ContainerStarted","Data":"1a6de1ba13a52bd2895d1002ff899b6d430bbd99bd0843e8f204e32a4359d3d3"} Feb 24 00:31:37 crc kubenswrapper[4824]: I0224 00:31:37.277204 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"ac546686-8945-46b8-8577-da344c7517bd","Type":"ContainerStarted","Data":"d8d4046d29485a031700a6ba696d966708022ce68163fa7c4c11586b2c58bfed"} Feb 24 00:31:37 crc kubenswrapper[4824]: I0224 00:31:37.294547 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt" event={"ID":"77c39bbc-adcc-40f9-afe2-9d97f93262b9","Type":"ContainerStarted","Data":"deeef59fe5eb3f8fdeb082ee224fb29bc82dce17d1d820fb64c7e338965f4541"} Feb 24 00:31:37 crc kubenswrapper[4824]: I0224 00:31:37.308110 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-76s9x" podStartSLOduration=4.308087113 podStartE2EDuration="4.308087113s" podCreationTimestamp="2026-02-24 00:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:31:37.300080024 +0000 UTC 
m=+1561.289704513" watchObservedRunningTime="2026-02-24 00:31:37.308087113 +0000 UTC m=+1561.297711582" Feb 24 00:31:38 crc kubenswrapper[4824]: I0224 00:31:38.329285 4824 generic.go:334] "Generic (PLEG): container finished" podID="77c39bbc-adcc-40f9-afe2-9d97f93262b9" containerID="deeef59fe5eb3f8fdeb082ee224fb29bc82dce17d1d820fb64c7e338965f4541" exitCode=0 Feb 24 00:31:38 crc kubenswrapper[4824]: I0224 00:31:38.329365 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt" event={"ID":"77c39bbc-adcc-40f9-afe2-9d97f93262b9","Type":"ContainerDied","Data":"deeef59fe5eb3f8fdeb082ee224fb29bc82dce17d1d820fb64c7e338965f4541"} Feb 24 00:31:38 crc kubenswrapper[4824]: I0224 00:31:38.329889 4824 scope.go:117] "RemoveContainer" containerID="c4a8fb8018686ea387d812e3112b01226a43b249311b6283fbdc21bb7472950f" Feb 24 00:31:38 crc kubenswrapper[4824]: I0224 00:31:38.330598 4824 scope.go:117] "RemoveContainer" containerID="deeef59fe5eb3f8fdeb082ee224fb29bc82dce17d1d820fb64c7e338965f4541" Feb 24 00:31:38 crc kubenswrapper[4824]: E0224 00:31:38.330991 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt_service-telemetry(77c39bbc-adcc-40f9-afe2-9d97f93262b9)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt" podUID="77c39bbc-adcc-40f9-afe2-9d97f93262b9" Feb 24 00:31:46 crc kubenswrapper[4824]: I0224 00:31:46.699436 4824 scope.go:117] "RemoveContainer" containerID="580e4473703a9287d77dfa0282262b828fdf928feb45462d2929a259ad5db38e" Feb 24 00:31:48 crc kubenswrapper[4824]: I0224 00:31:48.694187 4824 scope.go:117] "RemoveContainer" containerID="9daa99339d0f6cbad236603de0bc2f34e81e039ff7ee664ab10491e20bc297e0" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.469715 4824 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw" event={"ID":"b1b6fe19-ad2f-490e-80dc-39ed80de85b3","Type":"ContainerStarted","Data":"277da33123e68f7a326a3d53ded52b8c20c60ba9b74e4238248092cc640aa3da"} Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.472689 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"ac546686-8945-46b8-8577-da344c7517bd","Type":"ContainerStarted","Data":"4cd0924f21aeb1557ee2b24022a4dfe30dc99b54c6eb87da4165dc54e1ec62cd"} Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.475430 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84" event={"ID":"25d3b43f-0bff-44ca-83f4-b8a0052cd764","Type":"ContainerStarted","Data":"04a94944d394727f386664b963d2693378e6f68d00eb04b32282f1332c648a92"} Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.535847 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.945751584 podStartE2EDuration="14.535811923s" podCreationTimestamp="2026-02-24 00:31:35 +0000 UTC" firstStartedPulling="2026-02-24 00:31:36.743013235 +0000 UTC m=+1560.732637704" lastFinishedPulling="2026-02-24 00:31:48.333073574 +0000 UTC m=+1572.322698043" observedRunningTime="2026-02-24 00:31:49.513902779 +0000 UTC m=+1573.503527268" watchObservedRunningTime="2026-02-24 00:31:49.535811923 +0000 UTC m=+1573.525436392" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.694002 4824 scope.go:117] "RemoveContainer" containerID="eeddf94c39afeaf91d3aad6e7e7b037e372c6ac5b7c06815ce6a33281b134212" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.763175 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-bgdgk"] Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.764639 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.767559 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.767844 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.767978 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.768067 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.774367 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.774367 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.789316 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-bgdgk"] Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.811211 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4kgg\" (UniqueName: \"kubernetes.io/projected/3b209249-9fc7-4266-9089-9a228d1be14a-kube-api-access-s4kgg\") pod \"stf-smoketest-smoke1-bgdgk\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.811299 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-ceilometer-publisher\") pod \"stf-smoketest-smoke1-bgdgk\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.811342 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-collectd-config\") pod \"stf-smoketest-smoke1-bgdgk\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.811390 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-bgdgk\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.811465 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-sensubility-config\") pod \"stf-smoketest-smoke1-bgdgk\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.811533 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-healthcheck-log\") pod \"stf-smoketest-smoke1-bgdgk\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 
00:31:49.811602 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-bgdgk\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.913453 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-bgdgk\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.913537 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4kgg\" (UniqueName: \"kubernetes.io/projected/3b209249-9fc7-4266-9089-9a228d1be14a-kube-api-access-s4kgg\") pod \"stf-smoketest-smoke1-bgdgk\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.913582 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-ceilometer-publisher\") pod \"stf-smoketest-smoke1-bgdgk\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.913617 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-collectd-config\") pod \"stf-smoketest-smoke1-bgdgk\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " 
pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.913652 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-bgdgk\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.913698 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-sensubility-config\") pod \"stf-smoketest-smoke1-bgdgk\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.913727 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-healthcheck-log\") pod \"stf-smoketest-smoke1-bgdgk\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.914933 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-healthcheck-log\") pod \"stf-smoketest-smoke1-bgdgk\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.915485 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-bgdgk\" (UID: 
\"3b209249-9fc7-4266-9089-9a228d1be14a\") " pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.916450 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-ceilometer-publisher\") pod \"stf-smoketest-smoke1-bgdgk\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.917097 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-collectd-config\") pod \"stf-smoketest-smoke1-bgdgk\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.929728 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-sensubility-config\") pod \"stf-smoketest-smoke1-bgdgk\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.930043 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-bgdgk\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:49 crc kubenswrapper[4824]: I0224 00:31:49.942374 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4kgg\" (UniqueName: \"kubernetes.io/projected/3b209249-9fc7-4266-9089-9a228d1be14a-kube-api-access-s4kgg\") pod \"stf-smoketest-smoke1-bgdgk\" (UID: 
\"3b209249-9fc7-4266-9089-9a228d1be14a\") " pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:50 crc kubenswrapper[4824]: I0224 00:31:50.073128 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Feb 24 00:31:50 crc kubenswrapper[4824]: I0224 00:31:50.074607 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 24 00:31:50 crc kubenswrapper[4824]: I0224 00:31:50.080506 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 24 00:31:50 crc kubenswrapper[4824]: I0224 00:31:50.080725 4824 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:31:50 crc kubenswrapper[4824]: I0224 00:31:50.121188 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84jbw\" (UniqueName: \"kubernetes.io/projected/c03665a8-62b6-481d-a01b-4ce2932d9abb-kube-api-access-84jbw\") pod \"curl\" (UID: \"c03665a8-62b6-481d-a01b-4ce2932d9abb\") " pod="service-telemetry/curl" Feb 24 00:31:50 crc kubenswrapper[4824]: I0224 00:31:50.224029 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84jbw\" (UniqueName: \"kubernetes.io/projected/c03665a8-62b6-481d-a01b-4ce2932d9abb-kube-api-access-84jbw\") pod \"curl\" (UID: \"c03665a8-62b6-481d-a01b-4ce2932d9abb\") " pod="service-telemetry/curl" Feb 24 00:31:50 crc kubenswrapper[4824]: I0224 00:31:50.251292 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84jbw\" (UniqueName: \"kubernetes.io/projected/c03665a8-62b6-481d-a01b-4ce2932d9abb-kube-api-access-84jbw\") pod \"curl\" (UID: \"c03665a8-62b6-481d-a01b-4ce2932d9abb\") " pod="service-telemetry/curl" Feb 24 00:31:50 crc kubenswrapper[4824]: W0224 00:31:50.478095 4824 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b209249_9fc7_4266_9089_9a228d1be14a.slice/crio-508749b5e1ef331b99876faa1b1b095b9f3d7ad6af9e6509ba2456478ab33bef WatchSource:0}: Error finding container 508749b5e1ef331b99876faa1b1b095b9f3d7ad6af9e6509ba2456478ab33bef: Status 404 returned error can't find the container with id 508749b5e1ef331b99876faa1b1b095b9f3d7ad6af9e6509ba2456478ab33bef Feb 24 00:31:50 crc kubenswrapper[4824]: I0224 00:31:50.488207 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-bgdgk"] Feb 24 00:31:50 crc kubenswrapper[4824]: I0224 00:31:50.490969 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc" event={"ID":"19cb1d3e-5363-406a-a5f4-ecfe04edd347","Type":"ContainerStarted","Data":"46bfc54547b0d47a2ad1ebff3e2b4886bc99743c52a2ef8de6022717d2214c15"} Feb 24 00:31:50 crc kubenswrapper[4824]: I0224 00:31:50.503903 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-bgdgk" event={"ID":"3b209249-9fc7-4266-9089-9a228d1be14a","Type":"ContainerStarted","Data":"508749b5e1ef331b99876faa1b1b095b9f3d7ad6af9e6509ba2456478ab33bef"} Feb 24 00:31:50 crc kubenswrapper[4824]: I0224 00:31:50.507369 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 24 00:31:50 crc kubenswrapper[4824]: I0224 00:31:50.694288 4824 scope.go:117] "RemoveContainer" containerID="895f0c841b08d7b22272f1ad5924dc1a69f338d44d989cee670afa4ce10c5b44" Feb 24 00:31:50 crc kubenswrapper[4824]: I0224 00:31:50.759760 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 24 00:31:51 crc kubenswrapper[4824]: I0224 00:31:51.513536 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"c03665a8-62b6-481d-a01b-4ce2932d9abb","Type":"ContainerStarted","Data":"c74452911c8393037ee145106df2c4df90e84349d452289637be6905cc800c2b"} Feb 24 00:31:51 crc kubenswrapper[4824]: I0224 00:31:51.517075 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw" event={"ID":"99b264a5-5103-4445-8978-942c71208377","Type":"ContainerStarted","Data":"92cf2fcd4ded94a1c20ae4699ffcee082dda992896e77f610d94dda5872087eb"} Feb 24 00:31:53 crc kubenswrapper[4824]: I0224 00:31:53.275993 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:31:53 crc kubenswrapper[4824]: I0224 00:31:53.276534 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:31:53 crc kubenswrapper[4824]: I0224 00:31:53.276603 4824 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" Feb 24 
00:31:53 crc kubenswrapper[4824]: I0224 00:31:53.277489 4824 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364"} pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 00:31:53 crc kubenswrapper[4824]: I0224 00:31:53.277560 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" containerID="cri-o://1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" gracePeriod=600 Feb 24 00:31:53 crc kubenswrapper[4824]: E0224 00:31:53.412223 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:31:53 crc kubenswrapper[4824]: I0224 00:31:53.539396 4824 generic.go:334] "Generic (PLEG): container finished" podID="c03665a8-62b6-481d-a01b-4ce2932d9abb" containerID="a066438f83b79f6d2d2cc0502f171ad43237becee4855645e99d4cfdc391dae2" exitCode=0 Feb 24 00:31:53 crc kubenswrapper[4824]: I0224 00:31:53.539493 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"c03665a8-62b6-481d-a01b-4ce2932d9abb","Type":"ContainerDied","Data":"a066438f83b79f6d2d2cc0502f171ad43237becee4855645e99d4cfdc391dae2"} Feb 24 00:31:53 crc kubenswrapper[4824]: I0224 00:31:53.547786 4824 generic.go:334] "Generic (PLEG): container finished" 
podID="939ca085-9383-42e6-b7d6-37f101137273" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" exitCode=0 Feb 24 00:31:53 crc kubenswrapper[4824]: I0224 00:31:53.547873 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" event={"ID":"939ca085-9383-42e6-b7d6-37f101137273","Type":"ContainerDied","Data":"1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364"} Feb 24 00:31:53 crc kubenswrapper[4824]: I0224 00:31:53.547944 4824 scope.go:117] "RemoveContainer" containerID="8d819df51f5c54106cb947aecba467be6a7835d606611afb3e6526ac4d026f80" Feb 24 00:31:53 crc kubenswrapper[4824]: I0224 00:31:53.548938 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:31:53 crc kubenswrapper[4824]: E0224 00:31:53.549301 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:31:53 crc kubenswrapper[4824]: I0224 00:31:53.695841 4824 scope.go:117] "RemoveContainer" containerID="deeef59fe5eb3f8fdeb082ee224fb29bc82dce17d1d820fb64c7e338965f4541" Feb 24 00:31:54 crc kubenswrapper[4824]: I0224 00:31:54.560786 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt" event={"ID":"77c39bbc-adcc-40f9-afe2-9d97f93262b9","Type":"ContainerStarted","Data":"7c4b2444fb7012361af857e8af3e0676908edaf842b85ee8dbff426ef5a1d070"} Feb 24 00:31:55 crc kubenswrapper[4824]: I0224 00:31:55.039286 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 24 00:31:55 crc kubenswrapper[4824]: I0224 00:31:55.203200 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_c03665a8-62b6-481d-a01b-4ce2932d9abb/curl/0.log" Feb 24 00:31:55 crc kubenswrapper[4824]: I0224 00:31:55.233012 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84jbw\" (UniqueName: \"kubernetes.io/projected/c03665a8-62b6-481d-a01b-4ce2932d9abb-kube-api-access-84jbw\") pod \"c03665a8-62b6-481d-a01b-4ce2932d9abb\" (UID: \"c03665a8-62b6-481d-a01b-4ce2932d9abb\") " Feb 24 00:31:55 crc kubenswrapper[4824]: I0224 00:31:55.276854 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03665a8-62b6-481d-a01b-4ce2932d9abb-kube-api-access-84jbw" (OuterVolumeSpecName: "kube-api-access-84jbw") pod "c03665a8-62b6-481d-a01b-4ce2932d9abb" (UID: "c03665a8-62b6-481d-a01b-4ce2932d9abb"). InnerVolumeSpecName "kube-api-access-84jbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:31:55 crc kubenswrapper[4824]: I0224 00:31:55.334862 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84jbw\" (UniqueName: \"kubernetes.io/projected/c03665a8-62b6-481d-a01b-4ce2932d9abb-kube-api-access-84jbw\") on node \"crc\" DevicePath \"\"" Feb 24 00:31:55 crc kubenswrapper[4824]: I0224 00:31:55.495131 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-zxg6n_13d35d6f-04c4-438a-bda9-ce9c4ed84b99/prometheus-webhook-snmp/0.log" Feb 24 00:31:55 crc kubenswrapper[4824]: I0224 00:31:55.572562 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"c03665a8-62b6-481d-a01b-4ce2932d9abb","Type":"ContainerDied","Data":"c74452911c8393037ee145106df2c4df90e84349d452289637be6905cc800c2b"} Feb 24 00:31:55 crc kubenswrapper[4824]: I0224 00:31:55.572611 4824 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c74452911c8393037ee145106df2c4df90e84349d452289637be6905cc800c2b" Feb 24 00:31:55 crc kubenswrapper[4824]: I0224 00:31:55.572692 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 24 00:32:04 crc kubenswrapper[4824]: I0224 00:32:04.648025 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-bgdgk" event={"ID":"3b209249-9fc7-4266-9089-9a228d1be14a","Type":"ContainerStarted","Data":"ee868c29c29041bfc7c890bf03233d4fb588940a093db29326b9e176fdb4a0f4"} Feb 24 00:32:05 crc kubenswrapper[4824]: I0224 00:32:05.694545 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:32:05 crc kubenswrapper[4824]: E0224 00:32:05.695471 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:32:15 crc kubenswrapper[4824]: I0224 00:32:15.754594 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-bgdgk" event={"ID":"3b209249-9fc7-4266-9089-9a228d1be14a","Type":"ContainerStarted","Data":"e39c45acb66eaf25d0796fbc6e40a1a5197765b7c4699af74ae2a62987aca171"} Feb 24 00:32:15 crc kubenswrapper[4824]: I0224 00:32:15.779579 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-bgdgk" podStartSLOduration=2.717686988 podStartE2EDuration="26.779561478s" podCreationTimestamp="2026-02-24 00:31:49 +0000 UTC" firstStartedPulling="2026-02-24 00:31:50.481197449 +0000 UTC m=+1574.470821918" lastFinishedPulling="2026-02-24 00:32:14.543071939 +0000 UTC m=+1598.532696408" observedRunningTime="2026-02-24 00:32:15.773966899 +0000 UTC m=+1599.763591488" watchObservedRunningTime="2026-02-24 00:32:15.779561478 +0000 UTC 
m=+1599.769185947" Feb 24 00:32:19 crc kubenswrapper[4824]: I0224 00:32:19.693823 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:32:19 crc kubenswrapper[4824]: E0224 00:32:19.694911 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:32:25 crc kubenswrapper[4824]: I0224 00:32:25.662475 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-zxg6n_13d35d6f-04c4-438a-bda9-ce9c4ed84b99/prometheus-webhook-snmp/0.log" Feb 24 00:32:30 crc kubenswrapper[4824]: I0224 00:32:30.799569 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lwznq"] Feb 24 00:32:30 crc kubenswrapper[4824]: E0224 00:32:30.800700 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c03665a8-62b6-481d-a01b-4ce2932d9abb" containerName="curl" Feb 24 00:32:30 crc kubenswrapper[4824]: I0224 00:32:30.800716 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="c03665a8-62b6-481d-a01b-4ce2932d9abb" containerName="curl" Feb 24 00:32:30 crc kubenswrapper[4824]: I0224 00:32:30.800871 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="c03665a8-62b6-481d-a01b-4ce2932d9abb" containerName="curl" Feb 24 00:32:30 crc kubenswrapper[4824]: I0224 00:32:30.802763 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lwznq" Feb 24 00:32:30 crc kubenswrapper[4824]: I0224 00:32:30.825401 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lwznq"] Feb 24 00:32:30 crc kubenswrapper[4824]: I0224 00:32:30.999111 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fc5db4c-2be5-4a26-8e98-c4fb482b1cea-utilities\") pod \"community-operators-lwznq\" (UID: \"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea\") " pod="openshift-marketplace/community-operators-lwznq" Feb 24 00:32:30 crc kubenswrapper[4824]: I0224 00:32:30.999178 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fc5db4c-2be5-4a26-8e98-c4fb482b1cea-catalog-content\") pod \"community-operators-lwznq\" (UID: \"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea\") " pod="openshift-marketplace/community-operators-lwznq" Feb 24 00:32:31 crc kubenswrapper[4824]: I0224 00:32:30.999229 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf4q9\" (UniqueName: \"kubernetes.io/projected/2fc5db4c-2be5-4a26-8e98-c4fb482b1cea-kube-api-access-kf4q9\") pod \"community-operators-lwznq\" (UID: \"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea\") " pod="openshift-marketplace/community-operators-lwznq" Feb 24 00:32:31 crc kubenswrapper[4824]: I0224 00:32:31.101753 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fc5db4c-2be5-4a26-8e98-c4fb482b1cea-utilities\") pod \"community-operators-lwznq\" (UID: \"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea\") " pod="openshift-marketplace/community-operators-lwznq" Feb 24 00:32:31 crc kubenswrapper[4824]: I0224 00:32:31.102339 4824 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fc5db4c-2be5-4a26-8e98-c4fb482b1cea-catalog-content\") pod \"community-operators-lwznq\" (UID: \"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea\") " pod="openshift-marketplace/community-operators-lwznq" Feb 24 00:32:31 crc kubenswrapper[4824]: I0224 00:32:31.102375 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf4q9\" (UniqueName: \"kubernetes.io/projected/2fc5db4c-2be5-4a26-8e98-c4fb482b1cea-kube-api-access-kf4q9\") pod \"community-operators-lwznq\" (UID: \"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea\") " pod="openshift-marketplace/community-operators-lwznq" Feb 24 00:32:31 crc kubenswrapper[4824]: I0224 00:32:31.102554 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fc5db4c-2be5-4a26-8e98-c4fb482b1cea-utilities\") pod \"community-operators-lwznq\" (UID: \"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea\") " pod="openshift-marketplace/community-operators-lwznq" Feb 24 00:32:31 crc kubenswrapper[4824]: I0224 00:32:31.102975 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fc5db4c-2be5-4a26-8e98-c4fb482b1cea-catalog-content\") pod \"community-operators-lwznq\" (UID: \"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea\") " pod="openshift-marketplace/community-operators-lwznq" Feb 24 00:32:31 crc kubenswrapper[4824]: I0224 00:32:31.134439 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf4q9\" (UniqueName: \"kubernetes.io/projected/2fc5db4c-2be5-4a26-8e98-c4fb482b1cea-kube-api-access-kf4q9\") pod \"community-operators-lwznq\" (UID: \"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea\") " pod="openshift-marketplace/community-operators-lwznq" Feb 24 00:32:31 crc kubenswrapper[4824]: I0224 00:32:31.433828 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lwznq" Feb 24 00:32:31 crc kubenswrapper[4824]: I0224 00:32:31.700196 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:32:31 crc kubenswrapper[4824]: E0224 00:32:31.700966 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:32:31 crc kubenswrapper[4824]: I0224 00:32:31.742712 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lwznq"] Feb 24 00:32:31 crc kubenswrapper[4824]: I0224 00:32:31.902505 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lwznq" event={"ID":"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea","Type":"ContainerStarted","Data":"a57b0ccc6fb7339c031ffe79a75a2452b49fe5975dbc37e0e6d1e4a1c43a1a3c"} Feb 24 00:32:32 crc kubenswrapper[4824]: I0224 00:32:32.912816 4824 generic.go:334] "Generic (PLEG): container finished" podID="2fc5db4c-2be5-4a26-8e98-c4fb482b1cea" containerID="f1791b119fda8a76356ec44d4cb941205460e154a17e6bf25478d2d193834b3a" exitCode=0 Feb 24 00:32:32 crc kubenswrapper[4824]: I0224 00:32:32.912897 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lwznq" event={"ID":"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea","Type":"ContainerDied","Data":"f1791b119fda8a76356ec44d4cb941205460e154a17e6bf25478d2d193834b3a"} Feb 24 00:32:33 crc kubenswrapper[4824]: I0224 00:32:33.921926 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lwznq" 
event={"ID":"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea","Type":"ContainerStarted","Data":"1105fc0199081feabbc77e0aa8649e4fc6931a85dd0bd0a288d174bf9d62190b"} Feb 24 00:32:34 crc kubenswrapper[4824]: I0224 00:32:34.932858 4824 generic.go:334] "Generic (PLEG): container finished" podID="2fc5db4c-2be5-4a26-8e98-c4fb482b1cea" containerID="1105fc0199081feabbc77e0aa8649e4fc6931a85dd0bd0a288d174bf9d62190b" exitCode=0 Feb 24 00:32:34 crc kubenswrapper[4824]: I0224 00:32:34.932952 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lwznq" event={"ID":"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea","Type":"ContainerDied","Data":"1105fc0199081feabbc77e0aa8649e4fc6931a85dd0bd0a288d174bf9d62190b"} Feb 24 00:32:35 crc kubenswrapper[4824]: I0224 00:32:35.942973 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lwznq" event={"ID":"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea","Type":"ContainerStarted","Data":"e6fdead5bb2143b4d1e75d494c8e4c428dc0fa6bb4c070ad94edff3cba2dd209"} Feb 24 00:32:35 crc kubenswrapper[4824]: I0224 00:32:35.968795 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lwznq" podStartSLOduration=3.376302698 podStartE2EDuration="5.968773122s" podCreationTimestamp="2026-02-24 00:32:30 +0000 UTC" firstStartedPulling="2026-02-24 00:32:32.915381737 +0000 UTC m=+1616.905006206" lastFinishedPulling="2026-02-24 00:32:35.507852161 +0000 UTC m=+1619.497476630" observedRunningTime="2026-02-24 00:32:35.963204513 +0000 UTC m=+1619.952828982" watchObservedRunningTime="2026-02-24 00:32:35.968773122 +0000 UTC m=+1619.958397601" Feb 24 00:32:38 crc kubenswrapper[4824]: I0224 00:32:38.991209 4824 generic.go:334] "Generic (PLEG): container finished" podID="3b209249-9fc7-4266-9089-9a228d1be14a" containerID="ee868c29c29041bfc7c890bf03233d4fb588940a093db29326b9e176fdb4a0f4" exitCode=0 Feb 24 00:32:38 crc kubenswrapper[4824]: I0224 
00:32:38.991282 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-bgdgk" event={"ID":"3b209249-9fc7-4266-9089-9a228d1be14a","Type":"ContainerDied","Data":"ee868c29c29041bfc7c890bf03233d4fb588940a093db29326b9e176fdb4a0f4"} Feb 24 00:32:38 crc kubenswrapper[4824]: I0224 00:32:38.992628 4824 scope.go:117] "RemoveContainer" containerID="ee868c29c29041bfc7c890bf03233d4fb588940a093db29326b9e176fdb4a0f4" Feb 24 00:32:41 crc kubenswrapper[4824]: I0224 00:32:41.434424 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lwznq" Feb 24 00:32:41 crc kubenswrapper[4824]: I0224 00:32:41.435639 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lwznq" Feb 24 00:32:41 crc kubenswrapper[4824]: I0224 00:32:41.477726 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lwznq" Feb 24 00:32:42 crc kubenswrapper[4824]: I0224 00:32:42.054833 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lwznq" Feb 24 00:32:42 crc kubenswrapper[4824]: I0224 00:32:42.114604 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lwznq"] Feb 24 00:32:42 crc kubenswrapper[4824]: I0224 00:32:42.693593 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:32:42 crc kubenswrapper[4824]: E0224 00:32:42.693871 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:32:44 crc kubenswrapper[4824]: I0224 00:32:44.033074 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lwznq" podUID="2fc5db4c-2be5-4a26-8e98-c4fb482b1cea" containerName="registry-server" containerID="cri-o://e6fdead5bb2143b4d1e75d494c8e4c428dc0fa6bb4c070ad94edff3cba2dd209" gracePeriod=2 Feb 24 00:32:44 crc kubenswrapper[4824]: I0224 00:32:44.473230 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lwznq" Feb 24 00:32:44 crc kubenswrapper[4824]: I0224 00:32:44.549724 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf4q9\" (UniqueName: \"kubernetes.io/projected/2fc5db4c-2be5-4a26-8e98-c4fb482b1cea-kube-api-access-kf4q9\") pod \"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea\" (UID: \"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea\") " Feb 24 00:32:44 crc kubenswrapper[4824]: I0224 00:32:44.549875 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fc5db4c-2be5-4a26-8e98-c4fb482b1cea-utilities\") pod \"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea\" (UID: \"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea\") " Feb 24 00:32:44 crc kubenswrapper[4824]: I0224 00:32:44.549928 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fc5db4c-2be5-4a26-8e98-c4fb482b1cea-catalog-content\") pod \"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea\" (UID: \"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea\") " Feb 24 00:32:44 crc kubenswrapper[4824]: I0224 00:32:44.551114 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fc5db4c-2be5-4a26-8e98-c4fb482b1cea-utilities" (OuterVolumeSpecName: "utilities") 
pod "2fc5db4c-2be5-4a26-8e98-c4fb482b1cea" (UID: "2fc5db4c-2be5-4a26-8e98-c4fb482b1cea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:32:44 crc kubenswrapper[4824]: I0224 00:32:44.556445 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fc5db4c-2be5-4a26-8e98-c4fb482b1cea-kube-api-access-kf4q9" (OuterVolumeSpecName: "kube-api-access-kf4q9") pod "2fc5db4c-2be5-4a26-8e98-c4fb482b1cea" (UID: "2fc5db4c-2be5-4a26-8e98-c4fb482b1cea"). InnerVolumeSpecName "kube-api-access-kf4q9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:32:44 crc kubenswrapper[4824]: I0224 00:32:44.612637 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fc5db4c-2be5-4a26-8e98-c4fb482b1cea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fc5db4c-2be5-4a26-8e98-c4fb482b1cea" (UID: "2fc5db4c-2be5-4a26-8e98-c4fb482b1cea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:32:44 crc kubenswrapper[4824]: I0224 00:32:44.652135 4824 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fc5db4c-2be5-4a26-8e98-c4fb482b1cea-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:32:44 crc kubenswrapper[4824]: I0224 00:32:44.652188 4824 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fc5db4c-2be5-4a26-8e98-c4fb482b1cea-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:32:44 crc kubenswrapper[4824]: I0224 00:32:44.652207 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf4q9\" (UniqueName: \"kubernetes.io/projected/2fc5db4c-2be5-4a26-8e98-c4fb482b1cea-kube-api-access-kf4q9\") on node \"crc\" DevicePath \"\"" Feb 24 00:32:45 crc kubenswrapper[4824]: I0224 00:32:45.044625 4824 generic.go:334] "Generic (PLEG): container finished" podID="2fc5db4c-2be5-4a26-8e98-c4fb482b1cea" containerID="e6fdead5bb2143b4d1e75d494c8e4c428dc0fa6bb4c070ad94edff3cba2dd209" exitCode=0 Feb 24 00:32:45 crc kubenswrapper[4824]: I0224 00:32:45.044670 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lwznq" event={"ID":"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea","Type":"ContainerDied","Data":"e6fdead5bb2143b4d1e75d494c8e4c428dc0fa6bb4c070ad94edff3cba2dd209"} Feb 24 00:32:45 crc kubenswrapper[4824]: I0224 00:32:45.044734 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lwznq" Feb 24 00:32:45 crc kubenswrapper[4824]: I0224 00:32:45.044765 4824 scope.go:117] "RemoveContainer" containerID="e6fdead5bb2143b4d1e75d494c8e4c428dc0fa6bb4c070ad94edff3cba2dd209" Feb 24 00:32:45 crc kubenswrapper[4824]: I0224 00:32:45.044748 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lwznq" event={"ID":"2fc5db4c-2be5-4a26-8e98-c4fb482b1cea","Type":"ContainerDied","Data":"a57b0ccc6fb7339c031ffe79a75a2452b49fe5975dbc37e0e6d1e4a1c43a1a3c"} Feb 24 00:32:45 crc kubenswrapper[4824]: I0224 00:32:45.069949 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lwznq"] Feb 24 00:32:45 crc kubenswrapper[4824]: I0224 00:32:45.073906 4824 scope.go:117] "RemoveContainer" containerID="1105fc0199081feabbc77e0aa8649e4fc6931a85dd0bd0a288d174bf9d62190b" Feb 24 00:32:45 crc kubenswrapper[4824]: I0224 00:32:45.078592 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lwznq"] Feb 24 00:32:45 crc kubenswrapper[4824]: I0224 00:32:45.099258 4824 scope.go:117] "RemoveContainer" containerID="f1791b119fda8a76356ec44d4cb941205460e154a17e6bf25478d2d193834b3a" Feb 24 00:32:45 crc kubenswrapper[4824]: I0224 00:32:45.130270 4824 scope.go:117] "RemoveContainer" containerID="e6fdead5bb2143b4d1e75d494c8e4c428dc0fa6bb4c070ad94edff3cba2dd209" Feb 24 00:32:45 crc kubenswrapper[4824]: E0224 00:32:45.132112 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6fdead5bb2143b4d1e75d494c8e4c428dc0fa6bb4c070ad94edff3cba2dd209\": container with ID starting with e6fdead5bb2143b4d1e75d494c8e4c428dc0fa6bb4c070ad94edff3cba2dd209 not found: ID does not exist" containerID="e6fdead5bb2143b4d1e75d494c8e4c428dc0fa6bb4c070ad94edff3cba2dd209" Feb 24 00:32:45 crc kubenswrapper[4824]: I0224 00:32:45.132165 4824 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6fdead5bb2143b4d1e75d494c8e4c428dc0fa6bb4c070ad94edff3cba2dd209"} err="failed to get container status \"e6fdead5bb2143b4d1e75d494c8e4c428dc0fa6bb4c070ad94edff3cba2dd209\": rpc error: code = NotFound desc = could not find container \"e6fdead5bb2143b4d1e75d494c8e4c428dc0fa6bb4c070ad94edff3cba2dd209\": container with ID starting with e6fdead5bb2143b4d1e75d494c8e4c428dc0fa6bb4c070ad94edff3cba2dd209 not found: ID does not exist" Feb 24 00:32:45 crc kubenswrapper[4824]: I0224 00:32:45.132204 4824 scope.go:117] "RemoveContainer" containerID="1105fc0199081feabbc77e0aa8649e4fc6931a85dd0bd0a288d174bf9d62190b" Feb 24 00:32:45 crc kubenswrapper[4824]: E0224 00:32:45.132888 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1105fc0199081feabbc77e0aa8649e4fc6931a85dd0bd0a288d174bf9d62190b\": container with ID starting with 1105fc0199081feabbc77e0aa8649e4fc6931a85dd0bd0a288d174bf9d62190b not found: ID does not exist" containerID="1105fc0199081feabbc77e0aa8649e4fc6931a85dd0bd0a288d174bf9d62190b" Feb 24 00:32:45 crc kubenswrapper[4824]: I0224 00:32:45.132947 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1105fc0199081feabbc77e0aa8649e4fc6931a85dd0bd0a288d174bf9d62190b"} err="failed to get container status \"1105fc0199081feabbc77e0aa8649e4fc6931a85dd0bd0a288d174bf9d62190b\": rpc error: code = NotFound desc = could not find container \"1105fc0199081feabbc77e0aa8649e4fc6931a85dd0bd0a288d174bf9d62190b\": container with ID starting with 1105fc0199081feabbc77e0aa8649e4fc6931a85dd0bd0a288d174bf9d62190b not found: ID does not exist" Feb 24 00:32:45 crc kubenswrapper[4824]: I0224 00:32:45.132994 4824 scope.go:117] "RemoveContainer" containerID="f1791b119fda8a76356ec44d4cb941205460e154a17e6bf25478d2d193834b3a" Feb 24 00:32:45 crc kubenswrapper[4824]: E0224 
00:32:45.133493 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1791b119fda8a76356ec44d4cb941205460e154a17e6bf25478d2d193834b3a\": container with ID starting with f1791b119fda8a76356ec44d4cb941205460e154a17e6bf25478d2d193834b3a not found: ID does not exist" containerID="f1791b119fda8a76356ec44d4cb941205460e154a17e6bf25478d2d193834b3a" Feb 24 00:32:45 crc kubenswrapper[4824]: I0224 00:32:45.133544 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1791b119fda8a76356ec44d4cb941205460e154a17e6bf25478d2d193834b3a"} err="failed to get container status \"f1791b119fda8a76356ec44d4cb941205460e154a17e6bf25478d2d193834b3a\": rpc error: code = NotFound desc = could not find container \"f1791b119fda8a76356ec44d4cb941205460e154a17e6bf25478d2d193834b3a\": container with ID starting with f1791b119fda8a76356ec44d4cb941205460e154a17e6bf25478d2d193834b3a not found: ID does not exist" Feb 24 00:32:46 crc kubenswrapper[4824]: I0224 00:32:46.702713 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fc5db4c-2be5-4a26-8e98-c4fb482b1cea" path="/var/lib/kubelet/pods/2fc5db4c-2be5-4a26-8e98-c4fb482b1cea/volumes" Feb 24 00:32:47 crc kubenswrapper[4824]: I0224 00:32:47.066997 4824 generic.go:334] "Generic (PLEG): container finished" podID="3b209249-9fc7-4266-9089-9a228d1be14a" containerID="e39c45acb66eaf25d0796fbc6e40a1a5197765b7c4699af74ae2a62987aca171" exitCode=0 Feb 24 00:32:47 crc kubenswrapper[4824]: I0224 00:32:47.067065 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-bgdgk" event={"ID":"3b209249-9fc7-4266-9089-9a228d1be14a","Type":"ContainerDied","Data":"e39c45acb66eaf25d0796fbc6e40a1a5197765b7c4699af74ae2a62987aca171"} Feb 24 00:32:48 crc kubenswrapper[4824]: I0224 00:32:48.339101 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:32:48 crc kubenswrapper[4824]: I0224 00:32:48.421430 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-ceilometer-entrypoint-script\") pod \"3b209249-9fc7-4266-9089-9a228d1be14a\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " Feb 24 00:32:48 crc kubenswrapper[4824]: I0224 00:32:48.421544 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4kgg\" (UniqueName: \"kubernetes.io/projected/3b209249-9fc7-4266-9089-9a228d1be14a-kube-api-access-s4kgg\") pod \"3b209249-9fc7-4266-9089-9a228d1be14a\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " Feb 24 00:32:48 crc kubenswrapper[4824]: I0224 00:32:48.421647 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-collectd-config\") pod \"3b209249-9fc7-4266-9089-9a228d1be14a\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " Feb 24 00:32:48 crc kubenswrapper[4824]: I0224 00:32:48.421792 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-ceilometer-publisher\") pod \"3b209249-9fc7-4266-9089-9a228d1be14a\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " Feb 24 00:32:48 crc kubenswrapper[4824]: I0224 00:32:48.421814 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-collectd-entrypoint-script\") pod \"3b209249-9fc7-4266-9089-9a228d1be14a\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " Feb 24 00:32:48 crc kubenswrapper[4824]: I0224 00:32:48.421873 4824 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-healthcheck-log\") pod \"3b209249-9fc7-4266-9089-9a228d1be14a\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " Feb 24 00:32:48 crc kubenswrapper[4824]: I0224 00:32:48.421897 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-sensubility-config\") pod \"3b209249-9fc7-4266-9089-9a228d1be14a\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " Feb 24 00:32:48 crc kubenswrapper[4824]: I0224 00:32:48.430109 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b209249-9fc7-4266-9089-9a228d1be14a-kube-api-access-s4kgg" (OuterVolumeSpecName: "kube-api-access-s4kgg") pod "3b209249-9fc7-4266-9089-9a228d1be14a" (UID: "3b209249-9fc7-4266-9089-9a228d1be14a"). InnerVolumeSpecName "kube-api-access-s4kgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:32:48 crc kubenswrapper[4824]: I0224 00:32:48.444071 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "3b209249-9fc7-4266-9089-9a228d1be14a" (UID: "3b209249-9fc7-4266-9089-9a228d1be14a"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:32:48 crc kubenswrapper[4824]: I0224 00:32:48.444094 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "3b209249-9fc7-4266-9089-9a228d1be14a" (UID: "3b209249-9fc7-4266-9089-9a228d1be14a"). InnerVolumeSpecName "ceilometer-publisher". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:32:48 crc kubenswrapper[4824]: I0224 00:32:48.450848 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "3b209249-9fc7-4266-9089-9a228d1be14a" (UID: "3b209249-9fc7-4266-9089-9a228d1be14a"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:32:48 crc kubenswrapper[4824]: I0224 00:32:48.454155 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "3b209249-9fc7-4266-9089-9a228d1be14a" (UID: "3b209249-9fc7-4266-9089-9a228d1be14a"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:32:48 crc kubenswrapper[4824]: E0224 00:32:48.473551 4824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-ceilometer-entrypoint-script podName:3b209249-9fc7-4266-9089-9a228d1be14a nodeName:}" failed. No retries permitted until 2026-02-24 00:32:48.973469401 +0000 UTC m=+1632.963093870 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "ceilometer-entrypoint-script" (UniqueName: "kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-ceilometer-entrypoint-script") pod "3b209249-9fc7-4266-9089-9a228d1be14a" (UID: "3b209249-9fc7-4266-9089-9a228d1be14a") : error deleting /var/lib/kubelet/pods/3b209249-9fc7-4266-9089-9a228d1be14a/volume-subpaths: remove /var/lib/kubelet/pods/3b209249-9fc7-4266-9089-9a228d1be14a/volume-subpaths: no such file or directory Feb 24 00:32:48 crc kubenswrapper[4824]: I0224 00:32:48.473804 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "3b209249-9fc7-4266-9089-9a228d1be14a" (UID: "3b209249-9fc7-4266-9089-9a228d1be14a"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:32:48 crc kubenswrapper[4824]: I0224 00:32:48.524935 4824 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-healthcheck-log\") on node \"crc\" DevicePath \"\"" Feb 24 00:32:48 crc kubenswrapper[4824]: I0224 00:32:48.524982 4824 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-sensubility-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:32:48 crc kubenswrapper[4824]: I0224 00:32:48.524993 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4kgg\" (UniqueName: \"kubernetes.io/projected/3b209249-9fc7-4266-9089-9a228d1be14a-kube-api-access-s4kgg\") on node \"crc\" DevicePath \"\"" Feb 24 00:32:48 crc kubenswrapper[4824]: I0224 00:32:48.525004 4824 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: 
\"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-collectd-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:32:48 crc kubenswrapper[4824]: I0224 00:32:48.525013 4824 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Feb 24 00:32:48 crc kubenswrapper[4824]: I0224 00:32:48.525022 4824 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 24 00:32:49 crc kubenswrapper[4824]: I0224 00:32:49.033262 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-ceilometer-entrypoint-script\") pod \"3b209249-9fc7-4266-9089-9a228d1be14a\" (UID: \"3b209249-9fc7-4266-9089-9a228d1be14a\") " Feb 24 00:32:49 crc kubenswrapper[4824]: I0224 00:32:49.033782 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "3b209249-9fc7-4266-9089-9a228d1be14a" (UID: "3b209249-9fc7-4266-9089-9a228d1be14a"). InnerVolumeSpecName "ceilometer-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:32:49 crc kubenswrapper[4824]: I0224 00:32:49.034209 4824 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3b209249-9fc7-4266-9089-9a228d1be14a-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 24 00:32:49 crc kubenswrapper[4824]: I0224 00:32:49.086778 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-bgdgk" event={"ID":"3b209249-9fc7-4266-9089-9a228d1be14a","Type":"ContainerDied","Data":"508749b5e1ef331b99876faa1b1b095b9f3d7ad6af9e6509ba2456478ab33bef"} Feb 24 00:32:49 crc kubenswrapper[4824]: I0224 00:32:49.086841 4824 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="508749b5e1ef331b99876faa1b1b095b9f3d7ad6af9e6509ba2456478ab33bef" Feb 24 00:32:49 crc kubenswrapper[4824]: I0224 00:32:49.087330 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-bgdgk" Feb 24 00:32:50 crc kubenswrapper[4824]: I0224 00:32:50.313030 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-bgdgk_3b209249-9fc7-4266-9089-9a228d1be14a/smoketest-collectd/0.log" Feb 24 00:32:50 crc kubenswrapper[4824]: I0224 00:32:50.537908 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-bgdgk_3b209249-9fc7-4266-9089-9a228d1be14a/smoketest-ceilometer/0.log" Feb 24 00:32:50 crc kubenswrapper[4824]: I0224 00:32:50.820394 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-76s9x_58055cab-656a-46f2-a3e6-ab76d8943362/default-interconnect/0.log" Feb 24 00:32:51 crc kubenswrapper[4824]: I0224 00:32:51.074730 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw_b1b6fe19-ad2f-490e-80dc-39ed80de85b3/bridge/2.log" Feb 24 00:32:51 crc kubenswrapper[4824]: I0224 00:32:51.300793 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-68kfw_b1b6fe19-ad2f-490e-80dc-39ed80de85b3/sg-core/0.log" Feb 24 00:32:51 crc kubenswrapper[4824]: I0224 00:32:51.524635 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt_77c39bbc-adcc-40f9-afe2-9d97f93262b9/bridge/2.log" Feb 24 00:32:51 crc kubenswrapper[4824]: I0224 00:32:51.784254 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-7bcfc4c8bf-2d6bt_77c39bbc-adcc-40f9-afe2-9d97f93262b9/sg-core/0.log" Feb 24 00:32:52 crc kubenswrapper[4824]: I0224 00:32:52.060258 4824 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw_99b264a5-5103-4445-8978-942c71208377/bridge/2.log" Feb 24 00:32:52 crc kubenswrapper[4824]: I0224 00:32:52.322445 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-kmkbw_99b264a5-5103-4445-8978-942c71208377/sg-core/0.log" Feb 24 00:32:52 crc kubenswrapper[4824]: I0224 00:32:52.553023 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc_19cb1d3e-5363-406a-a5f4-ecfe04edd347/bridge/2.log" Feb 24 00:32:52 crc kubenswrapper[4824]: I0224 00:32:52.781007 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-6cd878c86f-p97zc_19cb1d3e-5363-406a-a5f4-ecfe04edd347/sg-core/0.log" Feb 24 00:32:52 crc kubenswrapper[4824]: I0224 00:32:52.997896 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84_25d3b43f-0bff-44ca-83f4-b8a0052cd764/bridge/2.log" Feb 24 00:32:53 crc kubenswrapper[4824]: I0224 00:32:53.296156 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-5jd84_25d3b43f-0bff-44ca-83f4-b8a0052cd764/sg-core/0.log" Feb 24 00:32:57 crc kubenswrapper[4824]: I0224 00:32:57.140196 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-755b8777c-j59cx_99d102db-b6a5-428f-acec-1311a225325d/operator/0.log" Feb 24 00:32:57 crc kubenswrapper[4824]: I0224 00:32:57.386452 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_d1d48ccf-0bde-4748-8128-1e82ca1f302a/prometheus/0.log" Feb 24 00:32:57 crc kubenswrapper[4824]: I0224 00:32:57.631499 4824 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_96f9c835-f7c9-4774-9b95-8911ab4ffb23/elasticsearch/0.log" Feb 24 00:32:57 crc kubenswrapper[4824]: I0224 00:32:57.694213 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:32:57 crc kubenswrapper[4824]: E0224 00:32:57.694547 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:32:57 crc kubenswrapper[4824]: I0224 00:32:57.887894 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-zxg6n_13d35d6f-04c4-438a-bda9-ce9c4ed84b99/prometheus-webhook-snmp/0.log" Feb 24 00:32:58 crc kubenswrapper[4824]: I0224 00:32:58.172587 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_b4916ffb-2e83-480a-a12f-ad04c6144517/alertmanager/0.log" Feb 24 00:33:12 crc kubenswrapper[4824]: I0224 00:33:12.186832 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-7f7c584b79-2rbxz_3394aaea-7658-498b-aab1-7494fb832c8f/operator/0.log" Feb 24 00:33:12 crc kubenswrapper[4824]: I0224 00:33:12.693488 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:33:12 crc kubenswrapper[4824]: E0224 00:33:12.693913 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:33:15 crc kubenswrapper[4824]: I0224 00:33:15.853593 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-755b8777c-j59cx_99d102db-b6a5-428f-acec-1311a225325d/operator/0.log" Feb 24 00:33:16 crc kubenswrapper[4824]: I0224 00:33:16.112253 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_ac546686-8945-46b8-8577-da344c7517bd/qdr/0.log" Feb 24 00:33:24 crc kubenswrapper[4824]: I0224 00:33:24.694386 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:33:24 crc kubenswrapper[4824]: E0224 00:33:24.695956 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:33:35 crc kubenswrapper[4824]: I0224 00:33:35.694185 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:33:35 crc kubenswrapper[4824]: E0224 00:33:35.695920 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" 
podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:33:46 crc kubenswrapper[4824]: I0224 00:33:46.703044 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:33:46 crc kubenswrapper[4824]: E0224 00:33:46.703986 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:33:49 crc kubenswrapper[4824]: I0224 00:33:49.452849 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v4984/must-gather-t5kpz"] Feb 24 00:33:49 crc kubenswrapper[4824]: E0224 00:33:49.453911 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b209249-9fc7-4266-9089-9a228d1be14a" containerName="smoketest-collectd" Feb 24 00:33:49 crc kubenswrapper[4824]: I0224 00:33:49.453929 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b209249-9fc7-4266-9089-9a228d1be14a" containerName="smoketest-collectd" Feb 24 00:33:49 crc kubenswrapper[4824]: E0224 00:33:49.453950 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc5db4c-2be5-4a26-8e98-c4fb482b1cea" containerName="registry-server" Feb 24 00:33:49 crc kubenswrapper[4824]: I0224 00:33:49.453959 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc5db4c-2be5-4a26-8e98-c4fb482b1cea" containerName="registry-server" Feb 24 00:33:49 crc kubenswrapper[4824]: E0224 00:33:49.453975 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc5db4c-2be5-4a26-8e98-c4fb482b1cea" containerName="extract-utilities" Feb 24 00:33:49 crc kubenswrapper[4824]: I0224 00:33:49.453984 4824 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="2fc5db4c-2be5-4a26-8e98-c4fb482b1cea" containerName="extract-utilities" Feb 24 00:33:49 crc kubenswrapper[4824]: E0224 00:33:49.454010 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc5db4c-2be5-4a26-8e98-c4fb482b1cea" containerName="extract-content" Feb 24 00:33:49 crc kubenswrapper[4824]: I0224 00:33:49.454018 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc5db4c-2be5-4a26-8e98-c4fb482b1cea" containerName="extract-content" Feb 24 00:33:49 crc kubenswrapper[4824]: E0224 00:33:49.454030 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b209249-9fc7-4266-9089-9a228d1be14a" containerName="smoketest-ceilometer" Feb 24 00:33:49 crc kubenswrapper[4824]: I0224 00:33:49.454038 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b209249-9fc7-4266-9089-9a228d1be14a" containerName="smoketest-ceilometer" Feb 24 00:33:49 crc kubenswrapper[4824]: I0224 00:33:49.454211 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b209249-9fc7-4266-9089-9a228d1be14a" containerName="smoketest-ceilometer" Feb 24 00:33:49 crc kubenswrapper[4824]: I0224 00:33:49.454223 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc5db4c-2be5-4a26-8e98-c4fb482b1cea" containerName="registry-server" Feb 24 00:33:49 crc kubenswrapper[4824]: I0224 00:33:49.454235 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b209249-9fc7-4266-9089-9a228d1be14a" containerName="smoketest-collectd" Feb 24 00:33:49 crc kubenswrapper[4824]: I0224 00:33:49.455236 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v4984/must-gather-t5kpz" Feb 24 00:33:49 crc kubenswrapper[4824]: I0224 00:33:49.461037 4824 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-v4984"/"default-dockercfg-mp758" Feb 24 00:33:49 crc kubenswrapper[4824]: I0224 00:33:49.461395 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v4984"/"openshift-service-ca.crt" Feb 24 00:33:49 crc kubenswrapper[4824]: I0224 00:33:49.466957 4824 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v4984"/"kube-root-ca.crt" Feb 24 00:33:49 crc kubenswrapper[4824]: I0224 00:33:49.482342 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v4984/must-gather-t5kpz"] Feb 24 00:33:49 crc kubenswrapper[4824]: I0224 00:33:49.516958 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w677s\" (UniqueName: \"kubernetes.io/projected/30551e99-c0dc-480e-8aa4-cfb7df233fa5-kube-api-access-w677s\") pod \"must-gather-t5kpz\" (UID: \"30551e99-c0dc-480e-8aa4-cfb7df233fa5\") " pod="openshift-must-gather-v4984/must-gather-t5kpz" Feb 24 00:33:49 crc kubenswrapper[4824]: I0224 00:33:49.517010 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/30551e99-c0dc-480e-8aa4-cfb7df233fa5-must-gather-output\") pod \"must-gather-t5kpz\" (UID: \"30551e99-c0dc-480e-8aa4-cfb7df233fa5\") " pod="openshift-must-gather-v4984/must-gather-t5kpz" Feb 24 00:33:49 crc kubenswrapper[4824]: I0224 00:33:49.618869 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w677s\" (UniqueName: \"kubernetes.io/projected/30551e99-c0dc-480e-8aa4-cfb7df233fa5-kube-api-access-w677s\") pod \"must-gather-t5kpz\" (UID: \"30551e99-c0dc-480e-8aa4-cfb7df233fa5\") " 
pod="openshift-must-gather-v4984/must-gather-t5kpz" Feb 24 00:33:49 crc kubenswrapper[4824]: I0224 00:33:49.618930 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/30551e99-c0dc-480e-8aa4-cfb7df233fa5-must-gather-output\") pod \"must-gather-t5kpz\" (UID: \"30551e99-c0dc-480e-8aa4-cfb7df233fa5\") " pod="openshift-must-gather-v4984/must-gather-t5kpz" Feb 24 00:33:49 crc kubenswrapper[4824]: I0224 00:33:49.619565 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/30551e99-c0dc-480e-8aa4-cfb7df233fa5-must-gather-output\") pod \"must-gather-t5kpz\" (UID: \"30551e99-c0dc-480e-8aa4-cfb7df233fa5\") " pod="openshift-must-gather-v4984/must-gather-t5kpz" Feb 24 00:33:49 crc kubenswrapper[4824]: I0224 00:33:49.649423 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w677s\" (UniqueName: \"kubernetes.io/projected/30551e99-c0dc-480e-8aa4-cfb7df233fa5-kube-api-access-w677s\") pod \"must-gather-t5kpz\" (UID: \"30551e99-c0dc-480e-8aa4-cfb7df233fa5\") " pod="openshift-must-gather-v4984/must-gather-t5kpz" Feb 24 00:33:49 crc kubenswrapper[4824]: I0224 00:33:49.790377 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v4984/must-gather-t5kpz" Feb 24 00:33:50 crc kubenswrapper[4824]: I0224 00:33:50.075598 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v4984/must-gather-t5kpz"] Feb 24 00:33:50 crc kubenswrapper[4824]: I0224 00:33:50.582608 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4984/must-gather-t5kpz" event={"ID":"30551e99-c0dc-480e-8aa4-cfb7df233fa5","Type":"ContainerStarted","Data":"6b1ff4420b968d45931a50a401488cf9d06db869860ffded055aa82b666bf6cf"} Feb 24 00:33:58 crc kubenswrapper[4824]: I0224 00:33:58.695088 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:33:58 crc kubenswrapper[4824]: E0224 00:33:58.696426 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:33:59 crc kubenswrapper[4824]: I0224 00:33:59.672572 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4984/must-gather-t5kpz" event={"ID":"30551e99-c0dc-480e-8aa4-cfb7df233fa5","Type":"ContainerStarted","Data":"c82f94371919f040f11f9a25c3e88f21683ea931dd570e2a2fd2964f6a6b29a8"} Feb 24 00:33:59 crc kubenswrapper[4824]: I0224 00:33:59.673016 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4984/must-gather-t5kpz" event={"ID":"30551e99-c0dc-480e-8aa4-cfb7df233fa5","Type":"ContainerStarted","Data":"be48c11ddee87f6376cb573bf02565d2ce45f4f7484d835a87096a80c130fad9"} Feb 24 00:33:59 crc kubenswrapper[4824]: I0224 00:33:59.698196 4824 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-v4984/must-gather-t5kpz" podStartSLOduration=2.259803591 podStartE2EDuration="10.698171256s" podCreationTimestamp="2026-02-24 00:33:49 +0000 UTC" firstStartedPulling="2026-02-24 00:33:50.08676884 +0000 UTC m=+1694.076393309" lastFinishedPulling="2026-02-24 00:33:58.525136505 +0000 UTC m=+1702.514760974" observedRunningTime="2026-02-24 00:33:59.69357131 +0000 UTC m=+1703.683195779" watchObservedRunningTime="2026-02-24 00:33:59.698171256 +0000 UTC m=+1703.687795725" Feb 24 00:34:12 crc kubenswrapper[4824]: I0224 00:34:12.693430 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:34:12 crc kubenswrapper[4824]: E0224 00:34:12.694422 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:34:27 crc kubenswrapper[4824]: I0224 00:34:27.694320 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:34:27 crc kubenswrapper[4824]: E0224 00:34:27.695540 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:34:39 crc kubenswrapper[4824]: I0224 00:34:39.694885 4824 scope.go:117] "RemoveContainer" 
containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:34:39 crc kubenswrapper[4824]: E0224 00:34:39.696003 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:34:44 crc kubenswrapper[4824]: I0224 00:34:44.972796 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-q8hvw_13bff804-f118-473b-a547-433aed671b46/control-plane-machine-set-operator/0.log" Feb 24 00:34:45 crc kubenswrapper[4824]: I0224 00:34:45.178115 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kh6hg_53344821-2f26-459a-9e42-003f3f1b5a87/kube-rbac-proxy/0.log" Feb 24 00:34:45 crc kubenswrapper[4824]: I0224 00:34:45.188795 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-kh6hg_53344821-2f26-459a-9e42-003f3f1b5a87/machine-api-operator/0.log" Feb 24 00:34:54 crc kubenswrapper[4824]: I0224 00:34:54.694234 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:34:54 crc kubenswrapper[4824]: E0224 00:34:54.695346 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" 
podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:34:58 crc kubenswrapper[4824]: I0224 00:34:58.377578 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-9mpql_1f370348-c40e-4096-98c1-d681f34b8659/cert-manager-controller/0.log" Feb 24 00:34:58 crc kubenswrapper[4824]: I0224 00:34:58.547745 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-qhzcr_219daf0d-f400-4a2c-8374-5c23e10c27a6/cert-manager-cainjector/0.log" Feb 24 00:34:58 crc kubenswrapper[4824]: I0224 00:34:58.557403 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-m8rqb_ad293038-bf1d-4800-bd32-9488c5f19e95/cert-manager-webhook/0.log" Feb 24 00:35:05 crc kubenswrapper[4824]: I0224 00:35:05.694445 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:35:05 crc kubenswrapper[4824]: E0224 00:35:05.695665 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:35:14 crc kubenswrapper[4824]: I0224 00:35:14.783784 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-df47j_02a08fee-e933-4730-8755-7419c78d6525/prometheus-operator/0.log" Feb 24 00:35:14 crc kubenswrapper[4824]: I0224 00:35:14.918432 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s_7c2534a7-fe82-47e6-b6f9-2bf928bf7c9a/prometheus-operator-admission-webhook/0.log" Feb 24 
00:35:14 crc kubenswrapper[4824]: I0224 00:35:14.977036 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf_350461e1-7bfd-4095-9d74-4c3df3159694/prometheus-operator-admission-webhook/0.log" Feb 24 00:35:15 crc kubenswrapper[4824]: I0224 00:35:15.112566 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-hhf7q_823099c2-9764-455a-a682-57c154c0d895/operator/0.log" Feb 24 00:35:15 crc kubenswrapper[4824]: I0224 00:35:15.198624 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-frbxc_885263fe-5a06-4089-b662-d3e4dbc7d08e/perses-operator/0.log" Feb 24 00:35:19 crc kubenswrapper[4824]: I0224 00:35:19.693806 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:35:19 crc kubenswrapper[4824]: E0224 00:35:19.694460 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:35:29 crc kubenswrapper[4824]: I0224 00:35:29.535247 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw_55bd419c-9f16-434a-9a7f-0693ab6601d4/util/0.log" Feb 24 00:35:29 crc kubenswrapper[4824]: I0224 00:35:29.781029 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw_55bd419c-9f16-434a-9a7f-0693ab6601d4/pull/0.log" Feb 24 00:35:29 crc kubenswrapper[4824]: 
I0224 00:35:29.800235 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw_55bd419c-9f16-434a-9a7f-0693ab6601d4/pull/0.log" Feb 24 00:35:29 crc kubenswrapper[4824]: I0224 00:35:29.834888 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw_55bd419c-9f16-434a-9a7f-0693ab6601d4/util/0.log" Feb 24 00:35:29 crc kubenswrapper[4824]: I0224 00:35:29.985009 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw_55bd419c-9f16-434a-9a7f-0693ab6601d4/util/0.log" Feb 24 00:35:29 crc kubenswrapper[4824]: I0224 00:35:29.985642 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw_55bd419c-9f16-434a-9a7f-0693ab6601d4/extract/0.log" Feb 24 00:35:29 crc kubenswrapper[4824]: I0224 00:35:29.993619 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17dtpw_55bd419c-9f16-434a-9a7f-0693ab6601d4/pull/0.log" Feb 24 00:35:30 crc kubenswrapper[4824]: I0224 00:35:30.159365 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf_7191d6cb-0051-4cd2-a93d-a26af6142eb8/util/0.log" Feb 24 00:35:30 crc kubenswrapper[4824]: I0224 00:35:30.387997 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf_7191d6cb-0051-4cd2-a93d-a26af6142eb8/pull/0.log" Feb 24 00:35:30 crc kubenswrapper[4824]: I0224 00:35:30.388009 4824 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf_7191d6cb-0051-4cd2-a93d-a26af6142eb8/pull/0.log" Feb 24 00:35:30 crc kubenswrapper[4824]: I0224 00:35:30.417444 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf_7191d6cb-0051-4cd2-a93d-a26af6142eb8/util/0.log" Feb 24 00:35:30 crc kubenswrapper[4824]: I0224 00:35:30.552864 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf_7191d6cb-0051-4cd2-a93d-a26af6142eb8/util/0.log" Feb 24 00:35:30 crc kubenswrapper[4824]: I0224 00:35:30.570042 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf_7191d6cb-0051-4cd2-a93d-a26af6142eb8/pull/0.log" Feb 24 00:35:30 crc kubenswrapper[4824]: I0224 00:35:30.578870 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnmwrf_7191d6cb-0051-4cd2-a93d-a26af6142eb8/extract/0.log" Feb 24 00:35:30 crc kubenswrapper[4824]: I0224 00:35:30.767871 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq_379ee973-5632-434f-953c-7f23d7dc8f9d/util/0.log" Feb 24 00:35:30 crc kubenswrapper[4824]: I0224 00:35:30.995228 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq_379ee973-5632-434f-953c-7f23d7dc8f9d/util/0.log" Feb 24 00:35:30 crc kubenswrapper[4824]: I0224 00:35:30.999308 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq_379ee973-5632-434f-953c-7f23d7dc8f9d/pull/0.log" Feb 24 
00:35:31 crc kubenswrapper[4824]: I0224 00:35:31.002007 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq_379ee973-5632-434f-953c-7f23d7dc8f9d/pull/0.log" Feb 24 00:35:31 crc kubenswrapper[4824]: I0224 00:35:31.170139 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq_379ee973-5632-434f-953c-7f23d7dc8f9d/util/0.log" Feb 24 00:35:31 crc kubenswrapper[4824]: I0224 00:35:31.227670 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq_379ee973-5632-434f-953c-7f23d7dc8f9d/pull/0.log" Feb 24 00:35:31 crc kubenswrapper[4824]: I0224 00:35:31.232761 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5jcmlq_379ee973-5632-434f-953c-7f23d7dc8f9d/extract/0.log" Feb 24 00:35:31 crc kubenswrapper[4824]: I0224 00:35:31.369891 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92_5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d/util/0.log" Feb 24 00:35:31 crc kubenswrapper[4824]: I0224 00:35:31.553392 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92_5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d/util/0.log" Feb 24 00:35:31 crc kubenswrapper[4824]: I0224 00:35:31.603109 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92_5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d/pull/0.log" Feb 24 00:35:31 crc kubenswrapper[4824]: I0224 00:35:31.608361 4824 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92_5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d/pull/0.log" Feb 24 00:35:31 crc kubenswrapper[4824]: I0224 00:35:31.774015 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92_5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d/pull/0.log" Feb 24 00:35:31 crc kubenswrapper[4824]: I0224 00:35:31.779470 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92_5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d/util/0.log" Feb 24 00:35:31 crc kubenswrapper[4824]: I0224 00:35:31.788672 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08t2v92_5017dbd5-8ff7-4321-9ff3-db31cdbb5f8d/extract/0.log" Feb 24 00:35:31 crc kubenswrapper[4824]: I0224 00:35:31.967652 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9gq54_2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d/extract-utilities/0.log" Feb 24 00:35:32 crc kubenswrapper[4824]: I0224 00:35:32.127178 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9gq54_2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d/extract-content/0.log" Feb 24 00:35:32 crc kubenswrapper[4824]: I0224 00:35:32.132877 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9gq54_2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d/extract-utilities/0.log" Feb 24 00:35:32 crc kubenswrapper[4824]: I0224 00:35:32.137886 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9gq54_2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d/extract-content/0.log" Feb 24 00:35:32 crc kubenswrapper[4824]: I0224 00:35:32.325234 4824 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-9gq54_2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d/extract-utilities/0.log" Feb 24 00:35:32 crc kubenswrapper[4824]: I0224 00:35:32.356061 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9gq54_2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d/extract-content/0.log" Feb 24 00:35:32 crc kubenswrapper[4824]: I0224 00:35:32.574349 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9t5cw_9da3bd34-bc43-4c9d-a974-a131ad945913/extract-utilities/0.log" Feb 24 00:35:32 crc kubenswrapper[4824]: I0224 00:35:32.681032 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-9gq54_2d7ceac8-1cca-49dc-bff6-f6fa38cbfc1d/registry-server/0.log" Feb 24 00:35:32 crc kubenswrapper[4824]: I0224 00:35:32.769010 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9t5cw_9da3bd34-bc43-4c9d-a974-a131ad945913/extract-content/0.log" Feb 24 00:35:32 crc kubenswrapper[4824]: I0224 00:35:32.789969 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9t5cw_9da3bd34-bc43-4c9d-a974-a131ad945913/extract-utilities/0.log" Feb 24 00:35:32 crc kubenswrapper[4824]: I0224 00:35:32.800819 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9t5cw_9da3bd34-bc43-4c9d-a974-a131ad945913/extract-content/0.log" Feb 24 00:35:32 crc kubenswrapper[4824]: I0224 00:35:32.983169 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9t5cw_9da3bd34-bc43-4c9d-a974-a131ad945913/extract-utilities/0.log" Feb 24 00:35:32 crc kubenswrapper[4824]: I0224 00:35:32.992750 4824 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-9t5cw_9da3bd34-bc43-4c9d-a974-a131ad945913/extract-content/0.log" Feb 24 00:35:33 crc kubenswrapper[4824]: I0224 00:35:33.231724 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-jqdmp_1c407e9b-e49e-46a5-8920-786aad1539fb/marketplace-operator/0.log" Feb 24 00:35:33 crc kubenswrapper[4824]: I0224 00:35:33.297441 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9t5cw_9da3bd34-bc43-4c9d-a974-a131ad945913/registry-server/0.log" Feb 24 00:35:33 crc kubenswrapper[4824]: I0224 00:35:33.368737 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2gh9t_ee751741-65c5-4db2-aa84-8c1e6868cf86/extract-utilities/0.log" Feb 24 00:35:33 crc kubenswrapper[4824]: I0224 00:35:33.475768 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2gh9t_ee751741-65c5-4db2-aa84-8c1e6868cf86/extract-utilities/0.log" Feb 24 00:35:33 crc kubenswrapper[4824]: I0224 00:35:33.513806 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2gh9t_ee751741-65c5-4db2-aa84-8c1e6868cf86/extract-content/0.log" Feb 24 00:35:33 crc kubenswrapper[4824]: I0224 00:35:33.536898 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2gh9t_ee751741-65c5-4db2-aa84-8c1e6868cf86/extract-content/0.log" Feb 24 00:35:33 crc kubenswrapper[4824]: I0224 00:35:33.654128 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2gh9t_ee751741-65c5-4db2-aa84-8c1e6868cf86/extract-utilities/0.log" Feb 24 00:35:33 crc kubenswrapper[4824]: I0224 00:35:33.676377 4824 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-2gh9t_ee751741-65c5-4db2-aa84-8c1e6868cf86/extract-content/0.log" Feb 24 00:35:33 crc kubenswrapper[4824]: I0224 00:35:33.693589 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:35:33 crc kubenswrapper[4824]: E0224 00:35:33.693835 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:35:33 crc kubenswrapper[4824]: I0224 00:35:33.924933 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2gh9t_ee751741-65c5-4db2-aa84-8c1e6868cf86/registry-server/0.log" Feb 24 00:35:46 crc kubenswrapper[4824]: I0224 00:35:46.444372 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-579b5ff79b-2qg2s_7c2534a7-fe82-47e6-b6f9-2bf928bf7c9a/prometheus-operator-admission-webhook/0.log" Feb 24 00:35:46 crc kubenswrapper[4824]: I0224 00:35:46.459540 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-df47j_02a08fee-e933-4730-8755-7419c78d6525/prometheus-operator/0.log" Feb 24 00:35:46 crc kubenswrapper[4824]: I0224 00:35:46.461424 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-579b5ff79b-nwsdf_350461e1-7bfd-4095-9d74-4c3df3159694/prometheus-operator-admission-webhook/0.log" Feb 24 00:35:46 crc kubenswrapper[4824]: I0224 00:35:46.619035 4824 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-hhf7q_823099c2-9764-455a-a682-57c154c0d895/operator/0.log" Feb 24 00:35:46 crc kubenswrapper[4824]: I0224 00:35:46.632648 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-frbxc_885263fe-5a06-4089-b662-d3e4dbc7d08e/perses-operator/0.log" Feb 24 00:35:48 crc kubenswrapper[4824]: I0224 00:35:48.694018 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:35:48 crc kubenswrapper[4824]: E0224 00:35:48.694487 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:36:02 crc kubenswrapper[4824]: I0224 00:36:02.694907 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:36:02 crc kubenswrapper[4824]: E0224 00:36:02.696220 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:36:17 crc kubenswrapper[4824]: I0224 00:36:17.694137 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:36:17 crc kubenswrapper[4824]: E0224 00:36:17.695398 4824 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:36:28 crc kubenswrapper[4824]: I0224 00:36:28.694488 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:36:28 crc kubenswrapper[4824]: E0224 00:36:28.695736 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:36:41 crc kubenswrapper[4824]: I0224 00:36:41.694051 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:36:41 crc kubenswrapper[4824]: E0224 00:36:41.695112 4824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vcbgn_openshift-machine-config-operator(939ca085-9383-42e6-b7d6-37f101137273)\"" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" Feb 24 00:36:46 crc kubenswrapper[4824]: I0224 00:36:46.106163 4824 generic.go:334] "Generic (PLEG): container finished" podID="30551e99-c0dc-480e-8aa4-cfb7df233fa5" containerID="be48c11ddee87f6376cb573bf02565d2ce45f4f7484d835a87096a80c130fad9" exitCode=0 Feb 24 
00:36:46 crc kubenswrapper[4824]: I0224 00:36:46.106460 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4984/must-gather-t5kpz" event={"ID":"30551e99-c0dc-480e-8aa4-cfb7df233fa5","Type":"ContainerDied","Data":"be48c11ddee87f6376cb573bf02565d2ce45f4f7484d835a87096a80c130fad9"} Feb 24 00:36:46 crc kubenswrapper[4824]: I0224 00:36:46.107446 4824 scope.go:117] "RemoveContainer" containerID="be48c11ddee87f6376cb573bf02565d2ce45f4f7484d835a87096a80c130fad9" Feb 24 00:36:46 crc kubenswrapper[4824]: I0224 00:36:46.829775 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v4984_must-gather-t5kpz_30551e99-c0dc-480e-8aa4-cfb7df233fa5/gather/0.log" Feb 24 00:36:53 crc kubenswrapper[4824]: I0224 00:36:53.694022 4824 scope.go:117] "RemoveContainer" containerID="1fadecb8108bb82124c951985733cccd456bd51e1965c7fbca274984d7901364" Feb 24 00:36:53 crc kubenswrapper[4824]: I0224 00:36:53.883021 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v4984/must-gather-t5kpz"] Feb 24 00:36:53 crc kubenswrapper[4824]: I0224 00:36:53.883800 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-v4984/must-gather-t5kpz" podUID="30551e99-c0dc-480e-8aa4-cfb7df233fa5" containerName="copy" containerID="cri-o://c82f94371919f040f11f9a25c3e88f21683ea931dd570e2a2fd2964f6a6b29a8" gracePeriod=2 Feb 24 00:36:53 crc kubenswrapper[4824]: I0224 00:36:53.890474 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v4984/must-gather-t5kpz"] Feb 24 00:36:54 crc kubenswrapper[4824]: I0224 00:36:54.186132 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" event={"ID":"939ca085-9383-42e6-b7d6-37f101137273","Type":"ContainerStarted","Data":"2d6f8ce5501722862dc8ed78387b85d7725f9ecfe5b1eca0592c5b8a2bb70509"} Feb 24 00:36:54 crc kubenswrapper[4824]: I0224 00:36:54.188631 4824 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v4984_must-gather-t5kpz_30551e99-c0dc-480e-8aa4-cfb7df233fa5/copy/0.log" Feb 24 00:36:54 crc kubenswrapper[4824]: I0224 00:36:54.189065 4824 generic.go:334] "Generic (PLEG): container finished" podID="30551e99-c0dc-480e-8aa4-cfb7df233fa5" containerID="c82f94371919f040f11f9a25c3e88f21683ea931dd570e2a2fd2964f6a6b29a8" exitCode=143 Feb 24 00:36:54 crc kubenswrapper[4824]: I0224 00:36:54.288107 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v4984_must-gather-t5kpz_30551e99-c0dc-480e-8aa4-cfb7df233fa5/copy/0.log" Feb 24 00:36:54 crc kubenswrapper[4824]: I0224 00:36:54.288613 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v4984/must-gather-t5kpz" Feb 24 00:36:54 crc kubenswrapper[4824]: I0224 00:36:54.394365 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w677s\" (UniqueName: \"kubernetes.io/projected/30551e99-c0dc-480e-8aa4-cfb7df233fa5-kube-api-access-w677s\") pod \"30551e99-c0dc-480e-8aa4-cfb7df233fa5\" (UID: \"30551e99-c0dc-480e-8aa4-cfb7df233fa5\") " Feb 24 00:36:54 crc kubenswrapper[4824]: I0224 00:36:54.394474 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/30551e99-c0dc-480e-8aa4-cfb7df233fa5-must-gather-output\") pod \"30551e99-c0dc-480e-8aa4-cfb7df233fa5\" (UID: \"30551e99-c0dc-480e-8aa4-cfb7df233fa5\") " Feb 24 00:36:54 crc kubenswrapper[4824]: I0224 00:36:54.406850 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30551e99-c0dc-480e-8aa4-cfb7df233fa5-kube-api-access-w677s" (OuterVolumeSpecName: "kube-api-access-w677s") pod "30551e99-c0dc-480e-8aa4-cfb7df233fa5" (UID: "30551e99-c0dc-480e-8aa4-cfb7df233fa5"). InnerVolumeSpecName "kube-api-access-w677s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:36:54 crc kubenswrapper[4824]: I0224 00:36:54.451813 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30551e99-c0dc-480e-8aa4-cfb7df233fa5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "30551e99-c0dc-480e-8aa4-cfb7df233fa5" (UID: "30551e99-c0dc-480e-8aa4-cfb7df233fa5"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:36:54 crc kubenswrapper[4824]: I0224 00:36:54.495807 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w677s\" (UniqueName: \"kubernetes.io/projected/30551e99-c0dc-480e-8aa4-cfb7df233fa5-kube-api-access-w677s\") on node \"crc\" DevicePath \"\"" Feb 24 00:36:54 crc kubenswrapper[4824]: I0224 00:36:54.496061 4824 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/30551e99-c0dc-480e-8aa4-cfb7df233fa5-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 24 00:36:54 crc kubenswrapper[4824]: I0224 00:36:54.704205 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30551e99-c0dc-480e-8aa4-cfb7df233fa5" path="/var/lib/kubelet/pods/30551e99-c0dc-480e-8aa4-cfb7df233fa5/volumes" Feb 24 00:36:55 crc kubenswrapper[4824]: I0224 00:36:55.197808 4824 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v4984_must-gather-t5kpz_30551e99-c0dc-480e-8aa4-cfb7df233fa5/copy/0.log" Feb 24 00:36:55 crc kubenswrapper[4824]: I0224 00:36:55.198443 4824 scope.go:117] "RemoveContainer" containerID="c82f94371919f040f11f9a25c3e88f21683ea931dd570e2a2fd2964f6a6b29a8" Feb 24 00:36:55 crc kubenswrapper[4824]: I0224 00:36:55.198564 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v4984/must-gather-t5kpz" Feb 24 00:36:55 crc kubenswrapper[4824]: I0224 00:36:55.223725 4824 scope.go:117] "RemoveContainer" containerID="be48c11ddee87f6376cb573bf02565d2ce45f4f7484d835a87096a80c130fad9" Feb 24 00:37:13 crc kubenswrapper[4824]: I0224 00:37:13.177898 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dzb2d"] Feb 24 00:37:13 crc kubenswrapper[4824]: E0224 00:37:13.178874 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30551e99-c0dc-480e-8aa4-cfb7df233fa5" containerName="copy" Feb 24 00:37:13 crc kubenswrapper[4824]: I0224 00:37:13.178890 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="30551e99-c0dc-480e-8aa4-cfb7df233fa5" containerName="copy" Feb 24 00:37:13 crc kubenswrapper[4824]: E0224 00:37:13.178901 4824 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30551e99-c0dc-480e-8aa4-cfb7df233fa5" containerName="gather" Feb 24 00:37:13 crc kubenswrapper[4824]: I0224 00:37:13.178907 4824 state_mem.go:107] "Deleted CPUSet assignment" podUID="30551e99-c0dc-480e-8aa4-cfb7df233fa5" containerName="gather" Feb 24 00:37:13 crc kubenswrapper[4824]: I0224 00:37:13.179025 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="30551e99-c0dc-480e-8aa4-cfb7df233fa5" containerName="copy" Feb 24 00:37:13 crc kubenswrapper[4824]: I0224 00:37:13.179038 4824 memory_manager.go:354] "RemoveStaleState removing state" podUID="30551e99-c0dc-480e-8aa4-cfb7df233fa5" containerName="gather" Feb 24 00:37:13 crc kubenswrapper[4824]: I0224 00:37:13.179969 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dzb2d" Feb 24 00:37:13 crc kubenswrapper[4824]: I0224 00:37:13.197910 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dzb2d"] Feb 24 00:37:13 crc kubenswrapper[4824]: I0224 00:37:13.341555 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00f22c6-c1c6-448e-82da-b778d04a8c0f-utilities\") pod \"certified-operators-dzb2d\" (UID: \"c00f22c6-c1c6-448e-82da-b778d04a8c0f\") " pod="openshift-marketplace/certified-operators-dzb2d" Feb 24 00:37:13 crc kubenswrapper[4824]: I0224 00:37:13.341651 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00f22c6-c1c6-448e-82da-b778d04a8c0f-catalog-content\") pod \"certified-operators-dzb2d\" (UID: \"c00f22c6-c1c6-448e-82da-b778d04a8c0f\") " pod="openshift-marketplace/certified-operators-dzb2d" Feb 24 00:37:13 crc kubenswrapper[4824]: I0224 00:37:13.341679 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz22r\" (UniqueName: \"kubernetes.io/projected/c00f22c6-c1c6-448e-82da-b778d04a8c0f-kube-api-access-dz22r\") pod \"certified-operators-dzb2d\" (UID: \"c00f22c6-c1c6-448e-82da-b778d04a8c0f\") " pod="openshift-marketplace/certified-operators-dzb2d" Feb 24 00:37:13 crc kubenswrapper[4824]: I0224 00:37:13.443553 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00f22c6-c1c6-448e-82da-b778d04a8c0f-utilities\") pod \"certified-operators-dzb2d\" (UID: \"c00f22c6-c1c6-448e-82da-b778d04a8c0f\") " pod="openshift-marketplace/certified-operators-dzb2d" Feb 24 00:37:13 crc kubenswrapper[4824]: I0224 00:37:13.444086 4824 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00f22c6-c1c6-448e-82da-b778d04a8c0f-catalog-content\") pod \"certified-operators-dzb2d\" (UID: \"c00f22c6-c1c6-448e-82da-b778d04a8c0f\") " pod="openshift-marketplace/certified-operators-dzb2d" Feb 24 00:37:13 crc kubenswrapper[4824]: I0224 00:37:13.444193 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz22r\" (UniqueName: \"kubernetes.io/projected/c00f22c6-c1c6-448e-82da-b778d04a8c0f-kube-api-access-dz22r\") pod \"certified-operators-dzb2d\" (UID: \"c00f22c6-c1c6-448e-82da-b778d04a8c0f\") " pod="openshift-marketplace/certified-operators-dzb2d" Feb 24 00:37:13 crc kubenswrapper[4824]: I0224 00:37:13.444341 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00f22c6-c1c6-448e-82da-b778d04a8c0f-utilities\") pod \"certified-operators-dzb2d\" (UID: \"c00f22c6-c1c6-448e-82da-b778d04a8c0f\") " pod="openshift-marketplace/certified-operators-dzb2d" Feb 24 00:37:13 crc kubenswrapper[4824]: I0224 00:37:13.444713 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00f22c6-c1c6-448e-82da-b778d04a8c0f-catalog-content\") pod \"certified-operators-dzb2d\" (UID: \"c00f22c6-c1c6-448e-82da-b778d04a8c0f\") " pod="openshift-marketplace/certified-operators-dzb2d" Feb 24 00:37:13 crc kubenswrapper[4824]: I0224 00:37:13.468139 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz22r\" (UniqueName: \"kubernetes.io/projected/c00f22c6-c1c6-448e-82da-b778d04a8c0f-kube-api-access-dz22r\") pod \"certified-operators-dzb2d\" (UID: \"c00f22c6-c1c6-448e-82da-b778d04a8c0f\") " pod="openshift-marketplace/certified-operators-dzb2d" Feb 24 00:37:13 crc kubenswrapper[4824]: I0224 00:37:13.510947 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dzb2d" Feb 24 00:37:13 crc kubenswrapper[4824]: I0224 00:37:13.793998 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dzb2d"] Feb 24 00:37:14 crc kubenswrapper[4824]: I0224 00:37:14.363580 4824 generic.go:334] "Generic (PLEG): container finished" podID="c00f22c6-c1c6-448e-82da-b778d04a8c0f" containerID="5e0ce036e3c72b781ea3b560ad5c6fd9d6bea62d8dd93a6cf8eb9b569edc50db" exitCode=0 Feb 24 00:37:14 crc kubenswrapper[4824]: I0224 00:37:14.363699 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dzb2d" event={"ID":"c00f22c6-c1c6-448e-82da-b778d04a8c0f","Type":"ContainerDied","Data":"5e0ce036e3c72b781ea3b560ad5c6fd9d6bea62d8dd93a6cf8eb9b569edc50db"} Feb 24 00:37:14 crc kubenswrapper[4824]: I0224 00:37:14.364158 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dzb2d" event={"ID":"c00f22c6-c1c6-448e-82da-b778d04a8c0f","Type":"ContainerStarted","Data":"8e1dc3a83619ae5d02e4c6a899175d5a9f386c93b9b7c498c85725fdec53ea8b"} Feb 24 00:37:14 crc kubenswrapper[4824]: I0224 00:37:14.366719 4824 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 00:37:15 crc kubenswrapper[4824]: I0224 00:37:15.375153 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dzb2d" event={"ID":"c00f22c6-c1c6-448e-82da-b778d04a8c0f","Type":"ContainerStarted","Data":"974c7c06ffea10da3a5f8cd31e0d72834e82e9bfc1f63954f9c6d3d2b8c8db86"} Feb 24 00:37:15 crc kubenswrapper[4824]: I0224 00:37:15.380827 4824 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-plnc5"] Feb 24 00:37:15 crc kubenswrapper[4824]: I0224 00:37:15.382737 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-plnc5" Feb 24 00:37:15 crc kubenswrapper[4824]: I0224 00:37:15.400387 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-plnc5"] Feb 24 00:37:15 crc kubenswrapper[4824]: I0224 00:37:15.485055 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88nvc\" (UniqueName: \"kubernetes.io/projected/9d1ec7bf-1088-4425-8d7e-36112806bc0b-kube-api-access-88nvc\") pod \"redhat-operators-plnc5\" (UID: \"9d1ec7bf-1088-4425-8d7e-36112806bc0b\") " pod="openshift-marketplace/redhat-operators-plnc5" Feb 24 00:37:15 crc kubenswrapper[4824]: I0224 00:37:15.485184 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d1ec7bf-1088-4425-8d7e-36112806bc0b-catalog-content\") pod \"redhat-operators-plnc5\" (UID: \"9d1ec7bf-1088-4425-8d7e-36112806bc0b\") " pod="openshift-marketplace/redhat-operators-plnc5" Feb 24 00:37:15 crc kubenswrapper[4824]: I0224 00:37:15.485239 4824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d1ec7bf-1088-4425-8d7e-36112806bc0b-utilities\") pod \"redhat-operators-plnc5\" (UID: \"9d1ec7bf-1088-4425-8d7e-36112806bc0b\") " pod="openshift-marketplace/redhat-operators-plnc5" Feb 24 00:37:15 crc kubenswrapper[4824]: I0224 00:37:15.586400 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88nvc\" (UniqueName: \"kubernetes.io/projected/9d1ec7bf-1088-4425-8d7e-36112806bc0b-kube-api-access-88nvc\") pod \"redhat-operators-plnc5\" (UID: \"9d1ec7bf-1088-4425-8d7e-36112806bc0b\") " pod="openshift-marketplace/redhat-operators-plnc5" Feb 24 00:37:15 crc kubenswrapper[4824]: I0224 00:37:15.586484 4824 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d1ec7bf-1088-4425-8d7e-36112806bc0b-catalog-content\") pod \"redhat-operators-plnc5\" (UID: \"9d1ec7bf-1088-4425-8d7e-36112806bc0b\") " pod="openshift-marketplace/redhat-operators-plnc5" Feb 24 00:37:15 crc kubenswrapper[4824]: I0224 00:37:15.586547 4824 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d1ec7bf-1088-4425-8d7e-36112806bc0b-utilities\") pod \"redhat-operators-plnc5\" (UID: \"9d1ec7bf-1088-4425-8d7e-36112806bc0b\") " pod="openshift-marketplace/redhat-operators-plnc5" Feb 24 00:37:15 crc kubenswrapper[4824]: I0224 00:37:15.587120 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d1ec7bf-1088-4425-8d7e-36112806bc0b-utilities\") pod \"redhat-operators-plnc5\" (UID: \"9d1ec7bf-1088-4425-8d7e-36112806bc0b\") " pod="openshift-marketplace/redhat-operators-plnc5" Feb 24 00:37:15 crc kubenswrapper[4824]: I0224 00:37:15.587370 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d1ec7bf-1088-4425-8d7e-36112806bc0b-catalog-content\") pod \"redhat-operators-plnc5\" (UID: \"9d1ec7bf-1088-4425-8d7e-36112806bc0b\") " pod="openshift-marketplace/redhat-operators-plnc5" Feb 24 00:37:15 crc kubenswrapper[4824]: I0224 00:37:15.611287 4824 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88nvc\" (UniqueName: \"kubernetes.io/projected/9d1ec7bf-1088-4425-8d7e-36112806bc0b-kube-api-access-88nvc\") pod \"redhat-operators-plnc5\" (UID: \"9d1ec7bf-1088-4425-8d7e-36112806bc0b\") " pod="openshift-marketplace/redhat-operators-plnc5" Feb 24 00:37:15 crc kubenswrapper[4824]: I0224 00:37:15.699603 4824 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-plnc5" Feb 24 00:37:15 crc kubenswrapper[4824]: I0224 00:37:15.949417 4824 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-plnc5"] Feb 24 00:37:16 crc kubenswrapper[4824]: I0224 00:37:16.386030 4824 generic.go:334] "Generic (PLEG): container finished" podID="c00f22c6-c1c6-448e-82da-b778d04a8c0f" containerID="974c7c06ffea10da3a5f8cd31e0d72834e82e9bfc1f63954f9c6d3d2b8c8db86" exitCode=0 Feb 24 00:37:16 crc kubenswrapper[4824]: I0224 00:37:16.386114 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dzb2d" event={"ID":"c00f22c6-c1c6-448e-82da-b778d04a8c0f","Type":"ContainerDied","Data":"974c7c06ffea10da3a5f8cd31e0d72834e82e9bfc1f63954f9c6d3d2b8c8db86"} Feb 24 00:37:16 crc kubenswrapper[4824]: I0224 00:37:16.388688 4824 generic.go:334] "Generic (PLEG): container finished" podID="9d1ec7bf-1088-4425-8d7e-36112806bc0b" containerID="d932575d23aee6270236e4e02e8eac17b663f548742031e262b07026c08e0e00" exitCode=0 Feb 24 00:37:16 crc kubenswrapper[4824]: I0224 00:37:16.388719 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plnc5" event={"ID":"9d1ec7bf-1088-4425-8d7e-36112806bc0b","Type":"ContainerDied","Data":"d932575d23aee6270236e4e02e8eac17b663f548742031e262b07026c08e0e00"} Feb 24 00:37:16 crc kubenswrapper[4824]: I0224 00:37:16.388739 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plnc5" event={"ID":"9d1ec7bf-1088-4425-8d7e-36112806bc0b","Type":"ContainerStarted","Data":"012fe057c0077558a9201df0455b125e8df328cdf606f15f72a5d15a80b93c69"} Feb 24 00:37:17 crc kubenswrapper[4824]: I0224 00:37:17.398136 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plnc5" 
event={"ID":"9d1ec7bf-1088-4425-8d7e-36112806bc0b","Type":"ContainerStarted","Data":"f95f8b60bfb86d3d4d9d5f73769d2f2841e4de66351544100b23acaa7e1f0e20"} Feb 24 00:37:17 crc kubenswrapper[4824]: I0224 00:37:17.400573 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dzb2d" event={"ID":"c00f22c6-c1c6-448e-82da-b778d04a8c0f","Type":"ContainerStarted","Data":"788a443efb38ec6ffead6b4c2bd4f5be9be2cbe38e8523898e0a39aa4e318f55"} Feb 24 00:37:17 crc kubenswrapper[4824]: I0224 00:37:17.452510 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dzb2d" podStartSLOduration=1.9647066359999998 podStartE2EDuration="4.452486678s" podCreationTimestamp="2026-02-24 00:37:13 +0000 UTC" firstStartedPulling="2026-02-24 00:37:14.366297694 +0000 UTC m=+1898.355922173" lastFinishedPulling="2026-02-24 00:37:16.854077746 +0000 UTC m=+1900.843702215" observedRunningTime="2026-02-24 00:37:17.451054402 +0000 UTC m=+1901.440678891" watchObservedRunningTime="2026-02-24 00:37:17.452486678 +0000 UTC m=+1901.442111147" Feb 24 00:37:18 crc kubenswrapper[4824]: I0224 00:37:18.414355 4824 generic.go:334] "Generic (PLEG): container finished" podID="9d1ec7bf-1088-4425-8d7e-36112806bc0b" containerID="f95f8b60bfb86d3d4d9d5f73769d2f2841e4de66351544100b23acaa7e1f0e20" exitCode=0 Feb 24 00:37:18 crc kubenswrapper[4824]: I0224 00:37:18.414650 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plnc5" event={"ID":"9d1ec7bf-1088-4425-8d7e-36112806bc0b","Type":"ContainerDied","Data":"f95f8b60bfb86d3d4d9d5f73769d2f2841e4de66351544100b23acaa7e1f0e20"} Feb 24 00:37:19 crc kubenswrapper[4824]: I0224 00:37:19.424511 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plnc5" 
event={"ID":"9d1ec7bf-1088-4425-8d7e-36112806bc0b","Type":"ContainerStarted","Data":"25c0739d92e36a44edd0b5aaf9c2a2dda811274cb63c527e398c4da941023b85"} Feb 24 00:37:19 crc kubenswrapper[4824]: I0224 00:37:19.452475 4824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-plnc5" podStartSLOduration=1.994357559 podStartE2EDuration="4.452447239s" podCreationTimestamp="2026-02-24 00:37:15 +0000 UTC" firstStartedPulling="2026-02-24 00:37:16.390113839 +0000 UTC m=+1900.379738308" lastFinishedPulling="2026-02-24 00:37:18.848203519 +0000 UTC m=+1902.837827988" observedRunningTime="2026-02-24 00:37:19.447289568 +0000 UTC m=+1903.436914057" watchObservedRunningTime="2026-02-24 00:37:19.452447239 +0000 UTC m=+1903.442071708" Feb 24 00:37:23 crc kubenswrapper[4824]: I0224 00:37:23.511887 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dzb2d" Feb 24 00:37:23 crc kubenswrapper[4824]: I0224 00:37:23.513497 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dzb2d" Feb 24 00:37:23 crc kubenswrapper[4824]: I0224 00:37:23.560500 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dzb2d" Feb 24 00:37:24 crc kubenswrapper[4824]: I0224 00:37:24.513915 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dzb2d" Feb 24 00:37:24 crc kubenswrapper[4824]: I0224 00:37:24.566902 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dzb2d"] Feb 24 00:37:25 crc kubenswrapper[4824]: I0224 00:37:25.700094 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-plnc5" Feb 24 00:37:25 crc kubenswrapper[4824]: I0224 00:37:25.700580 4824 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-plnc5" Feb 24 00:37:25 crc kubenswrapper[4824]: I0224 00:37:25.803753 4824 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-plnc5" Feb 24 00:37:26 crc kubenswrapper[4824]: I0224 00:37:26.483489 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dzb2d" podUID="c00f22c6-c1c6-448e-82da-b778d04a8c0f" containerName="registry-server" containerID="cri-o://788a443efb38ec6ffead6b4c2bd4f5be9be2cbe38e8523898e0a39aa4e318f55" gracePeriod=2 Feb 24 00:37:26 crc kubenswrapper[4824]: I0224 00:37:26.531702 4824 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-plnc5" Feb 24 00:37:26 crc kubenswrapper[4824]: I0224 00:37:26.968134 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-plnc5"] Feb 24 00:37:27 crc kubenswrapper[4824]: I0224 00:37:27.955021 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dzb2d" Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.111747 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz22r\" (UniqueName: \"kubernetes.io/projected/c00f22c6-c1c6-448e-82da-b778d04a8c0f-kube-api-access-dz22r\") pod \"c00f22c6-c1c6-448e-82da-b778d04a8c0f\" (UID: \"c00f22c6-c1c6-448e-82da-b778d04a8c0f\") " Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.112400 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00f22c6-c1c6-448e-82da-b778d04a8c0f-utilities\") pod \"c00f22c6-c1c6-448e-82da-b778d04a8c0f\" (UID: \"c00f22c6-c1c6-448e-82da-b778d04a8c0f\") " Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.113321 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c00f22c6-c1c6-448e-82da-b778d04a8c0f-utilities" (OuterVolumeSpecName: "utilities") pod "c00f22c6-c1c6-448e-82da-b778d04a8c0f" (UID: "c00f22c6-c1c6-448e-82da-b778d04a8c0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.113475 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00f22c6-c1c6-448e-82da-b778d04a8c0f-catalog-content\") pod \"c00f22c6-c1c6-448e-82da-b778d04a8c0f\" (UID: \"c00f22c6-c1c6-448e-82da-b778d04a8c0f\") " Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.124609 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c00f22c6-c1c6-448e-82da-b778d04a8c0f-kube-api-access-dz22r" (OuterVolumeSpecName: "kube-api-access-dz22r") pod "c00f22c6-c1c6-448e-82da-b778d04a8c0f" (UID: "c00f22c6-c1c6-448e-82da-b778d04a8c0f"). InnerVolumeSpecName "kube-api-access-dz22r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.133429 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz22r\" (UniqueName: \"kubernetes.io/projected/c00f22c6-c1c6-448e-82da-b778d04a8c0f-kube-api-access-dz22r\") on node \"crc\" DevicePath \"\"" Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.133478 4824 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c00f22c6-c1c6-448e-82da-b778d04a8c0f-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.187303 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c00f22c6-c1c6-448e-82da-b778d04a8c0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c00f22c6-c1c6-448e-82da-b778d04a8c0f" (UID: "c00f22c6-c1c6-448e-82da-b778d04a8c0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.234602 4824 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c00f22c6-c1c6-448e-82da-b778d04a8c0f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.501879 4824 generic.go:334] "Generic (PLEG): container finished" podID="c00f22c6-c1c6-448e-82da-b778d04a8c0f" containerID="788a443efb38ec6ffead6b4c2bd4f5be9be2cbe38e8523898e0a39aa4e318f55" exitCode=0 Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.501965 4824 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dzb2d" Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.501979 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dzb2d" event={"ID":"c00f22c6-c1c6-448e-82da-b778d04a8c0f","Type":"ContainerDied","Data":"788a443efb38ec6ffead6b4c2bd4f5be9be2cbe38e8523898e0a39aa4e318f55"} Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.502021 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dzb2d" event={"ID":"c00f22c6-c1c6-448e-82da-b778d04a8c0f","Type":"ContainerDied","Data":"8e1dc3a83619ae5d02e4c6a899175d5a9f386c93b9b7c498c85725fdec53ea8b"} Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.502040 4824 scope.go:117] "RemoveContainer" containerID="788a443efb38ec6ffead6b4c2bd4f5be9be2cbe38e8523898e0a39aa4e318f55" Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.502378 4824 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-plnc5" podUID="9d1ec7bf-1088-4425-8d7e-36112806bc0b" containerName="registry-server" containerID="cri-o://25c0739d92e36a44edd0b5aaf9c2a2dda811274cb63c527e398c4da941023b85" gracePeriod=2 Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.557824 4824 scope.go:117] "RemoveContainer" containerID="974c7c06ffea10da3a5f8cd31e0d72834e82e9bfc1f63954f9c6d3d2b8c8db86" Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.568587 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dzb2d"] Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.579392 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dzb2d"] Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.590897 4824 scope.go:117] "RemoveContainer" containerID="5e0ce036e3c72b781ea3b560ad5c6fd9d6bea62d8dd93a6cf8eb9b569edc50db" Feb 24 00:37:28 crc 
kubenswrapper[4824]: I0224 00:37:28.623911 4824 scope.go:117] "RemoveContainer" containerID="788a443efb38ec6ffead6b4c2bd4f5be9be2cbe38e8523898e0a39aa4e318f55" Feb 24 00:37:28 crc kubenswrapper[4824]: E0224 00:37:28.624464 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"788a443efb38ec6ffead6b4c2bd4f5be9be2cbe38e8523898e0a39aa4e318f55\": container with ID starting with 788a443efb38ec6ffead6b4c2bd4f5be9be2cbe38e8523898e0a39aa4e318f55 not found: ID does not exist" containerID="788a443efb38ec6ffead6b4c2bd4f5be9be2cbe38e8523898e0a39aa4e318f55" Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.624585 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"788a443efb38ec6ffead6b4c2bd4f5be9be2cbe38e8523898e0a39aa4e318f55"} err="failed to get container status \"788a443efb38ec6ffead6b4c2bd4f5be9be2cbe38e8523898e0a39aa4e318f55\": rpc error: code = NotFound desc = could not find container \"788a443efb38ec6ffead6b4c2bd4f5be9be2cbe38e8523898e0a39aa4e318f55\": container with ID starting with 788a443efb38ec6ffead6b4c2bd4f5be9be2cbe38e8523898e0a39aa4e318f55 not found: ID does not exist" Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.624624 4824 scope.go:117] "RemoveContainer" containerID="974c7c06ffea10da3a5f8cd31e0d72834e82e9bfc1f63954f9c6d3d2b8c8db86" Feb 24 00:37:28 crc kubenswrapper[4824]: E0224 00:37:28.625184 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"974c7c06ffea10da3a5f8cd31e0d72834e82e9bfc1f63954f9c6d3d2b8c8db86\": container with ID starting with 974c7c06ffea10da3a5f8cd31e0d72834e82e9bfc1f63954f9c6d3d2b8c8db86 not found: ID does not exist" containerID="974c7c06ffea10da3a5f8cd31e0d72834e82e9bfc1f63954f9c6d3d2b8c8db86" Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.625212 4824 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"974c7c06ffea10da3a5f8cd31e0d72834e82e9bfc1f63954f9c6d3d2b8c8db86"} err="failed to get container status \"974c7c06ffea10da3a5f8cd31e0d72834e82e9bfc1f63954f9c6d3d2b8c8db86\": rpc error: code = NotFound desc = could not find container \"974c7c06ffea10da3a5f8cd31e0d72834e82e9bfc1f63954f9c6d3d2b8c8db86\": container with ID starting with 974c7c06ffea10da3a5f8cd31e0d72834e82e9bfc1f63954f9c6d3d2b8c8db86 not found: ID does not exist" Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.625230 4824 scope.go:117] "RemoveContainer" containerID="5e0ce036e3c72b781ea3b560ad5c6fd9d6bea62d8dd93a6cf8eb9b569edc50db" Feb 24 00:37:28 crc kubenswrapper[4824]: E0224 00:37:28.628535 4824 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e0ce036e3c72b781ea3b560ad5c6fd9d6bea62d8dd93a6cf8eb9b569edc50db\": container with ID starting with 5e0ce036e3c72b781ea3b560ad5c6fd9d6bea62d8dd93a6cf8eb9b569edc50db not found: ID does not exist" containerID="5e0ce036e3c72b781ea3b560ad5c6fd9d6bea62d8dd93a6cf8eb9b569edc50db" Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.628570 4824 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e0ce036e3c72b781ea3b560ad5c6fd9d6bea62d8dd93a6cf8eb9b569edc50db"} err="failed to get container status \"5e0ce036e3c72b781ea3b560ad5c6fd9d6bea62d8dd93a6cf8eb9b569edc50db\": rpc error: code = NotFound desc = could not find container \"5e0ce036e3c72b781ea3b560ad5c6fd9d6bea62d8dd93a6cf8eb9b569edc50db\": container with ID starting with 5e0ce036e3c72b781ea3b560ad5c6fd9d6bea62d8dd93a6cf8eb9b569edc50db not found: ID does not exist" Feb 24 00:37:28 crc kubenswrapper[4824]: I0224 00:37:28.703588 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c00f22c6-c1c6-448e-82da-b778d04a8c0f" path="/var/lib/kubelet/pods/c00f22c6-c1c6-448e-82da-b778d04a8c0f/volumes" Feb 24 00:37:30 crc kubenswrapper[4824]: I0224 
00:37:30.527419 4824 generic.go:334] "Generic (PLEG): container finished" podID="9d1ec7bf-1088-4425-8d7e-36112806bc0b" containerID="25c0739d92e36a44edd0b5aaf9c2a2dda811274cb63c527e398c4da941023b85" exitCode=0 Feb 24 00:37:30 crc kubenswrapper[4824]: I0224 00:37:30.527578 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plnc5" event={"ID":"9d1ec7bf-1088-4425-8d7e-36112806bc0b","Type":"ContainerDied","Data":"25c0739d92e36a44edd0b5aaf9c2a2dda811274cb63c527e398c4da941023b85"} Feb 24 00:37:31 crc kubenswrapper[4824]: I0224 00:37:31.325244 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-plnc5" Feb 24 00:37:31 crc kubenswrapper[4824]: I0224 00:37:31.398393 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d1ec7bf-1088-4425-8d7e-36112806bc0b-catalog-content\") pod \"9d1ec7bf-1088-4425-8d7e-36112806bc0b\" (UID: \"9d1ec7bf-1088-4425-8d7e-36112806bc0b\") " Feb 24 00:37:31 crc kubenswrapper[4824]: I0224 00:37:31.398474 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88nvc\" (UniqueName: \"kubernetes.io/projected/9d1ec7bf-1088-4425-8d7e-36112806bc0b-kube-api-access-88nvc\") pod \"9d1ec7bf-1088-4425-8d7e-36112806bc0b\" (UID: \"9d1ec7bf-1088-4425-8d7e-36112806bc0b\") " Feb 24 00:37:31 crc kubenswrapper[4824]: I0224 00:37:31.398574 4824 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d1ec7bf-1088-4425-8d7e-36112806bc0b-utilities\") pod \"9d1ec7bf-1088-4425-8d7e-36112806bc0b\" (UID: \"9d1ec7bf-1088-4425-8d7e-36112806bc0b\") " Feb 24 00:37:31 crc kubenswrapper[4824]: I0224 00:37:31.399915 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/9d1ec7bf-1088-4425-8d7e-36112806bc0b-utilities" (OuterVolumeSpecName: "utilities") pod "9d1ec7bf-1088-4425-8d7e-36112806bc0b" (UID: "9d1ec7bf-1088-4425-8d7e-36112806bc0b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:37:31 crc kubenswrapper[4824]: I0224 00:37:31.408124 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d1ec7bf-1088-4425-8d7e-36112806bc0b-kube-api-access-88nvc" (OuterVolumeSpecName: "kube-api-access-88nvc") pod "9d1ec7bf-1088-4425-8d7e-36112806bc0b" (UID: "9d1ec7bf-1088-4425-8d7e-36112806bc0b"). InnerVolumeSpecName "kube-api-access-88nvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:37:31 crc kubenswrapper[4824]: I0224 00:37:31.501125 4824 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88nvc\" (UniqueName: \"kubernetes.io/projected/9d1ec7bf-1088-4425-8d7e-36112806bc0b-kube-api-access-88nvc\") on node \"crc\" DevicePath \"\"" Feb 24 00:37:31 crc kubenswrapper[4824]: I0224 00:37:31.501173 4824 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d1ec7bf-1088-4425-8d7e-36112806bc0b-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:37:31 crc kubenswrapper[4824]: I0224 00:37:31.518171 4824 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d1ec7bf-1088-4425-8d7e-36112806bc0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d1ec7bf-1088-4425-8d7e-36112806bc0b" (UID: "9d1ec7bf-1088-4425-8d7e-36112806bc0b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:37:31 crc kubenswrapper[4824]: I0224 00:37:31.543604 4824 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-plnc5" event={"ID":"9d1ec7bf-1088-4425-8d7e-36112806bc0b","Type":"ContainerDied","Data":"012fe057c0077558a9201df0455b125e8df328cdf606f15f72a5d15a80b93c69"} Feb 24 00:37:31 crc kubenswrapper[4824]: I0224 00:37:31.543692 4824 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-plnc5" Feb 24 00:37:31 crc kubenswrapper[4824]: I0224 00:37:31.543702 4824 scope.go:117] "RemoveContainer" containerID="25c0739d92e36a44edd0b5aaf9c2a2dda811274cb63c527e398c4da941023b85" Feb 24 00:37:31 crc kubenswrapper[4824]: I0224 00:37:31.577864 4824 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-plnc5"] Feb 24 00:37:31 crc kubenswrapper[4824]: I0224 00:37:31.584510 4824 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-plnc5"] Feb 24 00:37:31 crc kubenswrapper[4824]: I0224 00:37:31.588258 4824 scope.go:117] "RemoveContainer" containerID="f95f8b60bfb86d3d4d9d5f73769d2f2841e4de66351544100b23acaa7e1f0e20" Feb 24 00:37:31 crc kubenswrapper[4824]: I0224 00:37:31.602940 4824 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d1ec7bf-1088-4425-8d7e-36112806bc0b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:37:31 crc kubenswrapper[4824]: I0224 00:37:31.615610 4824 scope.go:117] "RemoveContainer" containerID="d932575d23aee6270236e4e02e8eac17b663f548742031e262b07026c08e0e00" Feb 24 00:37:32 crc kubenswrapper[4824]: I0224 00:37:32.705148 4824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d1ec7bf-1088-4425-8d7e-36112806bc0b" path="/var/lib/kubelet/pods/9d1ec7bf-1088-4425-8d7e-36112806bc0b/volumes" Feb 24 00:38:53 crc 
kubenswrapper[4824]: I0224 00:38:53.275956 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:38:53 crc kubenswrapper[4824]: I0224 00:38:53.276633 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:39:23 crc kubenswrapper[4824]: I0224 00:39:23.276646 4824 patch_prober.go:28] interesting pod/machine-config-daemon-vcbgn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:39:23 crc kubenswrapper[4824]: I0224 00:39:23.277461 4824 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vcbgn" podUID="939ca085-9383-42e6-b7d6-37f101137273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"